{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Image Classification\n",
    "\n",
    "Image classification distinguishes images of different categories based on their semantic content. It is a core task in computer vision and the foundation of other high-level vision tasks such as:\n",
    "\n",
    "1. object detection,\n",
    "1. image segmentation,\n",
    "1. object tracking,\n",
    "1. behavior analysis,\n",
    "1. face recognition.\n",
    "\n",
    "## Basic Pipeline\n",
    "\n",
    "1. Use a convolutional neural network to extract image features.\n",
    "\n",
    "2. Use those features to predict class probabilities.\n",
    "\n",
    "3. Build a classification loss function from the training-sample labels.\n",
    "\n",
    "4. Train the model end to end.\n",
    "\n",
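    "For the binary task used later in this notebook, the loss in step 3 is sigmoid cross-entropy. A minimal NumPy sketch of that loss (illustrative only, not the framework implementation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "def sigmoid_cross_entropy_with_logits(logits, labels):\n",
    "    # binary cross-entropy computed from raw logits\n",
    "    p = sigmoid(logits)\n",
    "    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p))\n",
    "\n",
    "logits = np.array([2.0, -1.0])   # raw network outputs\n",
    "labels = np.array([1.0, 0.0])    # ground-truth labels\n",
    "loss = sigmoid_cross_entropy_with_logits(logits, labels)\n",
    "```\n",
    "\n",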
    "\n",
    "\n",
    "## The iChallenge-PM Dataset\n",
    "\n",
    "> This section uses the eye disease classification dataset iChallenge-PM and solves the image classification problem with the following convolutional neural network models.\n",
    "\n",
    "\n",
    "- LeNet: achieved great success on handwritten digit recognition.\n",
    "\n",
    "\n",
    "- AlexNet: applied to the large-scale image dataset ImageNet, winning the 2012 ImageNet competition.\n",
    "\n",
    "\n",
    "- VGG: one of the most popular convolutional neural networks today; its strengths are a simple structure and broad applicability.\n",
    "\n",
    "\n",
    "- GoogLeNet: won the 2014 ImageNet competition.\n",
    "\n",
    "\n",
    "- ResNet: deepens the network by introducing residual blocks, reducing the error rate on the ImageNet dataset to 3.6%.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# LeNet\n",
    "\n",
    "LeNet is one of the earliest convolutional neural networks; it extracts image features by repeatedly applying combinations of convolution and pooling layers.\n",
    "\n",
    "Its architecture is shown in **Figure 1**; the variant shown here is the LeNet-5 model from the original paper:\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/33bbff96924e4b36b613f0c1c36a89dfb72e3b56b3be464dbbce22f7ce575b0d\" width = \"800\"></center>\n",
    "<center><br>Figure 1: LeNet network structure</br></center>\n",
    "<br></br>\n",
    "\n",
    "* Module 1: a 6-channel 5×5 convolution followed by 2×2 pooling. The convolution extracts feature patterns from the image (using a sigmoid activation), reducing the image size from 32 to 28; the pooling layer lowers the output feature map's sensitivity to spatial position and reduces the size to 14.\n",
    "\n",
    "* Module 2: the same kernel sizes as module 1, with the channel count increased from 6 to 16. The convolution reduces the image size to 10, and pooling reduces it to 5.\n",
    "\n",
    "* Module 3: a 120-channel 5×5 convolution. After this third convolution the image size shrinks to 1 while the channel count grows to 120. The resulting feature map is fed into fully connected layers: the first has 64 output neurons, and the second has as many output neurons as there are class labels (10 for handwritten digit recognition). A Softmax activation then yields each class's predicted probability.\n",
    "\n",
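    "The feature-map sizes quoted above (32 → 28 → 14 → 10 → 5 → 1) can be checked with simple arithmetic for valid, stride-1 convolutions and 2×2 pooling:\n",
    "\n",
    "```python\n",
    "def conv_out(size, kernel):\n",
    "    # valid convolution, stride 1, no padding\n",
    "    return size - kernel + 1\n",
    "\n",
    "def pool_out(size, stride=2):\n",
    "    return size // stride\n",
    "\n",
    "s = 32\n",
    "s = pool_out(conv_out(s, 5))  # module 1: 32 -> 28 -> 14\n",
    "s = pool_out(conv_out(s, 5))  # module 2: 14 -> 10 -> 5\n",
    "s = conv_out(s, 5)            # module 3: 5 -> 1\n",
    "print(s)\n",
    "```\n",
    "\n",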
    "------\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Applying LeNet to the Eye Disease Dataset iChallenge-PM\n",
    "\n",
    "\n",
    "\n",
    "------\n",
    "**Note:**\n",
    "\n",
    "iChallenge-PM is a medical dataset on pathologic myopia (PM) provided for the iChallenge competition co-organized by the Zhongshan Ophthalmic Center of Sun Yat-sen University. It contains:\n",
    "\n",
    "    fundus retina images from 1200 subjects\n",
    "    \n",
    "    400 images each in the training, validation, and test sets\n",
    "\n",
    "The data can be [downloaded](https://aistudio.baidu.com/aistudio/datasetdetail/19065) from AI Studio\n",
    "\n",
    "------\n",
    "\n",
    "### Dataset Image Naming\n",
    "\n",
    "iChallenge-PM contains fundus images of both pathologic myopia patients and non-pathologic subjects, named according to the following rules:\n",
    "\n",
    "- Pathologic myopia (PM): file names start with P\n",
    "\n",
    "- Non-pathologic myopia (non-PM):\n",
    "\n",
    "  * high myopia: file names start with H\n",
    "  \n",
    "  * normal eyes: file names start with N\n",
    "\n",
    "In the .csv data, under the Label column:\n",
    "\n",
    "\tLabel 1: images of pathologic patients, used as positive samples.\n",
    "\tLabel 0: images of non-pathologic subjects, used as negative samples.\n",
    "    \n",
    "\n",
    "\n",
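    "The naming rule above maps directly to labels; a minimal sketch of the mapping (the same logic appears later in the data loader):\n",
    "\n",
    "```python\n",
    "def label_from_filename(name):\n",
    "    # H (high myopia) and N (normal) are non-pathologic: negative samples\n",
    "    if name[0] in ('H', 'N'):\n",
    "        return 0\n",
    "    # P marks pathologic myopia: positive samples\n",
    "    if name[0] == 'P':\n",
    "        return 1\n",
    "    raise ValueError('unexpected file name: ' + name)\n",
    "\n",
    "print(label_from_filename('N0012.jpg'), label_from_filename('P0095.jpg'))\n",
    "```\n",
    "\n",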
    "### Preparation\n",
    "#### Dataset Preparation\n",
    "\n",
    "The /home/aistudio/data/data19065 directory contains the following three files, which are decompressed into the /home/aistudio/work/palm directory.\n",
    "- training.zip: the training images and labels\n",
    "- validation.zip: the validation-set images\n",
    "- valid_gt.zip: the validation-set labels\n",
    "\n",
    "------\n",
    "**Note:**\n",
    "\n",
    "After decompressing valid_gt.zip, the PM_Label_and_Fovea_Location.xlsx file under /home/aistudio/work/palm/PALM-Validation-GT/ must be converted to csv format; for the code examples in this section it has already been converted to labels.csv.\n",
    "\n",
    "------\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/aistudio/work/palm/PALM-Training400\n"
     ]
    }
   ],
   "source": [
    "# Uncomment the lines below on the first run to decompress the files\n",
    "# If they have already been decompressed, skip this cell: extracting over existing files would raise errors\n",
    "# !unzip -o -q -d /home/aistudio/work/palm /home/aistudio/data/data19065/training.zip\n",
    "# %cd /home/aistudio/work/palm/PALM-Training400/\n",
    "# !unzip -o -q PALM-Training400.zip\n",
    "# !unzip -o -q -d /home/aistudio/work/palm /home/aistudio/data/data19065/validation.zip\n",
    "# !unzip -o -q -d /home/aistudio/work/palm /home/aistudio/data/data19065/valid_gt.zip\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "ename": "FileNotFoundError",
     "evalue": "[Errno 2] No such file or directory: '/home/aistudio/work/palm/PALM-Training400/PALM-Training400/N0012.jpg'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mFileNotFoundError\u001b[0m                         Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-1-436f1f52a9d4>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m     13\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     14\u001b[0m \u001b[0;31m# 读取图片\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 15\u001b[0;31m \u001b[0mimg1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mImage\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mDATADIR\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfile1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     16\u001b[0m \u001b[0mimg1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mimg1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     17\u001b[0m \u001b[0mimg2\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mImage\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mDATADIR\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfile2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/PIL/Image.py\u001b[0m in \u001b[0;36mopen\u001b[0;34m(fp, mode)\u001b[0m\n\u001b[1;32m   2876\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   2877\u001b[0m     \u001b[0;32mif\u001b[0m \u001b[0mfilename\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2878\u001b[0;31m         \u001b[0mfp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbuiltins\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfilename\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"rb\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   2879\u001b[0m         \u001b[0mexclusive_fp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   2880\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mFileNotFoundError\u001b[0m: [Errno 2] No such file or directory: '/home/aistudio/work/palm/PALM-Training400/PALM-Training400/N0012.jpg'"
     ]
    }
   ],
   "source": [
    "# Pick two images from the dataset (one positive, one negative sample) and display them;\n",
    "# later cells extract features with LeNet and build a classifier over such samples.\n",
    "\n",
    "import os\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "from PIL import Image\n",
    "\n",
    "DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'\n",
    "# File names starting with N are normal fundus images; file names starting with P are pathologic ones\n",
    "file1 = 'N0012.jpg'\n",
    "file2 = 'P0095.jpg'\n",
    "\n",
    "# Read the images\n",
    "img1 = Image.open(os.path.join(DATADIR, file1))\n",
    "img1 = np.array(img1)\n",
    "img2 = Image.open(os.path.join(DATADIR, file2))\n",
    "img2 = np.array(img2)\n",
    "\n",
    "# Preview the loaded images\n",
    "plt.figure(figsize=(16, 8))\n",
    "f = plt.subplot(121)\n",
    "f.set_title('Normal', fontsize=20)\n",
    "plt.imshow(img1)\n",
    "f = plt.subplot(122)\n",
    "f.set_title('PM', fontsize=20)\n",
    "plt.imshow(img2)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((2056, 2124, 3), (2056, 2124, 3))"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Check the image shapes\n",
    "img1.shape, img2.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Data Preprocessing\n",
    "\n",
    "Read the images from disk with OpenCV, resize each one to $224\\times224$, and scale the pixel values into $[-1, 1]$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import cv2\n",
    "import random\n",
    "import numpy as np\n",
    "\n",
    "# Preprocess the image data that was read in\n",
    "def transform_img(img):\n",
    "    # Resize the image to 224x224\n",
    "    img = cv2.resize(img, (224, 224))\n",
    "    # The image is read in [H, W, C] layout;\n",
    "    # transpose it to [C, H, W]\n",
    "    img = np.transpose(img, (2,0,1))\n",
    "    img = img.astype('float32')\n",
    "    # Scale the data to the range [-1.0, 1.0]\n",
    "    img = img / 255.\n",
    "    img = img * 2.0 - 1.0\n",
    "    return img\n",
    "\n",
    "# Define the training-set data reader\n",
    "# datadir: directory holding the image files\n",
    "# batch_size: number of images per mini-batch\n",
    "# mode: training mode or test mode\n",
    "def data_loader(datadir, batch_size=10, mode='train'):\n",
    "    # List the files under datadir; every one of them will be read\n",
    "    filenames = os.listdir(datadir)\n",
    "    def reader():\n",
    "        if mode == 'train':\n",
    "            # Shuffle the data order during training\n",
    "            random.shuffle(filenames)\n",
    "        \n",
    "        batch_imgs = []\n",
    "        batch_labels = []\n",
    "        for name in filenames:\n",
    "            \n",
    "            # Build the file path and read the image\n",
    "            filepath = os.path.join(datadir, name)\n",
    "            img = cv2.imread(filepath)\n",
    "            # Preprocess the image with transform_img\n",
    "            img = transform_img(img)\n",
    "            \n",
    "            # File names starting with H denote high myopia; N denotes normal vision\n",
    "            if name[0] == 'H' or name[0] == 'N':\n",
    "                \n",
    "                # High myopia and normal vision samples are both non-pathologic: negative samples, label 0\n",
    "                label = 0\n",
    "            elif name[0] == 'P':\n",
    "                \n",
    "                # P denotes pathologic myopia: positive samples, label 1\n",
    "                label = 1\n",
    "            else:\n",
    "                raise ValueError('Unexpected file name: ' + name)\n",
    "\n",
    "            # Append each sample to the batch lists\n",
    "            batch_imgs.append(img)\n",
    "            batch_labels.append(label)\n",
    "            if len(batch_imgs) == batch_size:\n",
    "                # Once the list length reaches batch_size,\n",
    "                # yield it as one mini-batch from the generator\n",
    "                imgs_array = np.array(batch_imgs).astype('float32')\n",
    "                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)\n",
    "                yield imgs_array, labels_array\n",
    "                batch_imgs = []\n",
    "                batch_labels = []\n",
    "\n",
    "        if len(batch_imgs) > 0:\n",
    "            # Pack the remaining samples (fewer than batch_size) into a final mini-batch\n",
    "            imgs_array = np.array(batch_imgs).astype('float32')\n",
    "            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)\n",
    "            yield imgs_array, labels_array\n",
    "\n",
    "    return reader\n",
    "\n",
    "# Define the validation-set data reader\n",
    "def valid_data_loader(datadir, csvfile, batch_size=10, mode='valid'):\n",
    "    # The training reader derives labels from file names; the validation reader\n",
    "    # instead reads each image's label from csvfile.\n",
    "    # Inspect the decompressed validation labels to see what csvfile contains.\n",
    "    # Each line is one sample, formatted as follows:\n",
    "    # the first column is the image id, the second the file name, the third the label;\n",
    "    # the fourth and fifth columns are the Fovea coordinates, irrelevant to classification\n",
    "    # ID,imgName,Label,Fovea_X,Fovea_Y\n",
    "    # 1,V0001.jpg,0,1157.74,1019.87\n",
    "    # 2,V0002.jpg,1,1285.82,1080.47\n",
    "    # Open the csvfile containing the validation labels and read its contents\n",
    "    filelists = open(csvfile).readlines()\n",
    "    def reader():\n",
    "        batch_imgs = []\n",
    "        batch_labels = []\n",
    "        for line in filelists[1:]:\n",
    "            line = line.strip().split(',')\n",
    "            name = line[1]\n",
    "            label = int(line[2])\n",
    "            # Load the image by file name and preprocess it\n",
    "            filepath = os.path.join(datadir, name)\n",
    "            img = cv2.imread(filepath)\n",
    "            img = transform_img(img)\n",
    "            # Append each sample to the batch lists\n",
    "            batch_imgs.append(img)\n",
    "            batch_labels.append(label)\n",
    "            if len(batch_imgs) == batch_size:\n",
    "                # Once the list length reaches batch_size,\n",
    "                # yield it as one mini-batch from the generator\n",
    "                imgs_array = np.array(batch_imgs).astype('float32')\n",
    "                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)\n",
    "                yield imgs_array, labels_array\n",
    "                batch_imgs = []\n",
    "                batch_labels = []\n",
    "\n",
    "        if len(batch_imgs) > 0:\n",
    "            # Pack the remaining samples (fewer than batch_size) into a final mini-batch\n",
    "            imgs_array = np.array(batch_imgs).astype('float32')\n",
    "            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)\n",
    "            yield imgs_array, labels_array\n",
    "\n",
    "    return reader\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((10, 3, 224, 224), (10, 1))"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Check the data shapes\n",
    "DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'\n",
    "train_loader = data_loader(DATADIR, \n",
    "                           batch_size=10, mode='train')\n",
    "data_reader = train_loader()\n",
    "data = next(data_reader)\n",
    "data[0].shape, data[1].shape\n",
    "\n",
    "eval_loader = data_loader(DATADIR, \n",
    "                           batch_size=10, mode='eval')\n",
    "data_reader = eval_loader()\n",
    "data = next(data_reader)\n",
    "data[0].shape, data[1].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Start Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "start evaluation .......\n",
      "loss=0.6912223264575005, acc=0.4675000049173832\n"
     ]
    }
   ],
   "source": [
    "# -*- coding: utf-8 -*-\n",
    "\n",
    "# LeNet for eye disease image classification\n",
    "\n",
    "import os\n",
    "import random\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "import numpy as np\n",
    "\n",
    "DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'\n",
    "DATADIR2 = '/home/aistudio/work/palm/PALM-Validation400'\n",
    "CSVFILE = '/home/aistudio/labels.csv'\n",
    "\n",
    "# Define the training process\n",
    "def train(model):\n",
    "    with fluid.dygraph.guard():\n",
    "        print('start training ... ')\n",
    "        model.train()\n",
    "        epoch_num = 5\n",
    "        # Define the optimizer\n",
    "        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameter_list=model.parameters())\n",
    "        # Create the training and validation data readers\n",
    "        train_loader = data_loader(DATADIR, batch_size=10, mode='train')\n",
    "        valid_loader = valid_data_loader(DATADIR2, CSVFILE)\n",
    "        for epoch in range(epoch_num):\n",
    "            for batch_id, data in enumerate(train_loader()):\n",
    "                x_data, y_data = data\n",
    "                img = fluid.dygraph.to_variable(x_data)\n",
    "                label = fluid.dygraph.to_variable(y_data)\n",
    "                # Run the forward pass to get predictions\n",
    "                logits = model(img)\n",
    "                # Compute the loss\n",
    "                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)\n",
    "                avg_loss = fluid.layers.mean(loss)\n",
    "\n",
    "                if batch_id % 10 == 0:\n",
    "                    print(\"epoch: {}, batch_id: {}, loss is: {}\".format(epoch, batch_id, avg_loss.numpy()))\n",
    "                # Backpropagate, update the weights, then clear the gradients\n",
    "                avg_loss.backward()\n",
    "                opt.minimize(avg_loss)\n",
    "                model.clear_gradients()\n",
    "\n",
    "            model.eval()\n",
    "            accuracies = []\n",
    "            losses = []\n",
    "            for batch_id, data in enumerate(valid_loader()):\n",
    "                x_data, y_data = data\n",
    "                img = fluid.dygraph.to_variable(x_data)\n",
    "                label = fluid.dygraph.to_variable(y_data)\n",
    "                # Run the forward pass to get predictions\n",
    "                logits = model(img)\n",
    "                # Binary classification: the sigmoid output is split into two classes at a 0.5 threshold\n",
    "                # Compute the sigmoid probabilities, then the loss\n",
    "                pred = fluid.layers.sigmoid(logits)\n",
    "                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)\n",
    "                # Probability of the other class (prediction below the 0.5 threshold)\n",
    "                pred2 = pred * (-1.0) + 1.0\n",
    "                # Concatenate the two class probabilities along dimension 1\n",
    "                pred = fluid.layers.concat([pred2, pred], axis=1)\n",
    "                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))\n",
    "                accuracies.append(acc.numpy())\n",
    "                losses.append(loss.numpy())\n",
    "            print(\"[validation] accuracy/loss: {}/{}\".format(np.mean(accuracies), np.mean(losses)))\n",
    "            model.train()\n",
    "\n",
    "        # save params of model\n",
    "        fluid.save_dygraph(model.state_dict(), 'palm')\n",
    "        # save optimizer state\n",
    "        fluid.save_dygraph(opt.state_dict(), 'palm')\n",
    "\n",
    "\n",
    "# Define the evaluation process\n",
    "def evaluation(model, params_file_path):\n",
    "    with fluid.dygraph.guard():\n",
    "        print('start evaluation .......')\n",
    "        # Load the model parameters\n",
    "        model_state_dict, _ = fluid.load_dygraph(params_file_path)\n",
    "        model.load_dict(model_state_dict)\n",
    "\n",
    "        model.eval()\n",
    "        eval_loader = data_loader(DATADIR, \n",
    "                           batch_size=10, mode='eval')\n",
    "\n",
    "        acc_set = []\n",
    "        avg_loss_set = []\n",
    "        for batch_id, data in enumerate(eval_loader()):\n",
    "            x_data, y_data = data\n",
    "            img = fluid.dygraph.to_variable(x_data)\n",
    "            label = fluid.dygraph.to_variable(y_data)\n",
    "            y_data = y_data.astype(np.int64)\n",
    "            label_64 = fluid.dygraph.to_variable(y_data)\n",
    "            # Compute predictions and accuracy\n",
    "            prediction, acc = model(img, label_64)\n",
    "            # Compute the loss value\n",
    "            loss = fluid.layers.sigmoid_cross_entropy_with_logits(prediction, label)\n",
    "            avg_loss = fluid.layers.mean(loss)\n",
    "            acc_set.append(float(acc.numpy()))\n",
    "            avg_loss_set.append(float(avg_loss.numpy()))\n",
    "        # Compute the mean accuracy\n",
    "        acc_val_mean = np.array(acc_set).mean()\n",
    "        avg_loss_val_mean = np.array(avg_loss_set).mean()\n",
    "\n",
    "        print('loss={}, acc={}'.format(avg_loss_val_mean, acc_val_mean))\n",
    "\n",
    "# Import the required packages\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "import numpy as np\n",
    "from paddle.fluid.dygraph.nn import Conv2D, Pool2D, Linear\n",
    "\n",
    "# Define the LeNet network structure\n",
    "class LeNet(fluid.dygraph.Layer):\n",
    "    def __init__(self, num_classes=1):\n",
    "        super(LeNet, self).__init__()\n",
    "\n",
    "        # Build convolution + pooling blocks: each conv uses a Sigmoid activation followed by 2x2 max pooling\n",
    "        self.conv1 = Conv2D(num_channels=3, num_filters=6, filter_size=5, act='sigmoid')\n",
    "        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')\n",
    "        self.conv2 = Conv2D(num_channels=6, num_filters=16, filter_size=5, act='sigmoid')\n",
    "        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')\n",
    "        # Create the third convolution layer\n",
    "        self.conv3 = Conv2D(num_channels=16, num_filters=120, filter_size=4, act='sigmoid')\n",
    "        # Create the fully connected layers: the first outputs 64 neurons; the second outputs as many neurons as there are classes\n",
    "        self.fc1 = Linear(input_dim=300000, output_dim=64, act='sigmoid')\n",
    "        self.fc2 = Linear(input_dim=64, output_dim=num_classes)\n",
    "    # Forward pass of the network\n",
    "    def forward(self, x, label=None):\n",
    "        x = self.conv1(x)\n",
    "        x = self.pool1(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.pool2(x)\n",
    "        x = self.conv3(x)\n",
    "        x = fluid.layers.reshape(x, [x.shape[0], -1])\n",
    "        x = self.fc1(x)\n",
    "        x = self.fc2(x)\n",
    "        if label is not None:\n",
    "            acc = fluid.layers.accuracy(input=x, label=label)\n",
    "            return x, acc\n",
    "        else:\n",
    "            return x\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    # Create the model\n",
    "    with fluid.dygraph.guard():\n",
    "        model = LeNet(num_classes=1)\n",
    "\n",
    "    train(model)\n",
    "    # evaluation(model, params_file_path=\"palm\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results show that on the eye disease dataset iChallenge-PM, LeNet's loss barely decreases and the model fails to converge. This is because MNIST images are small ($28\\times28$) while the eye disease images are much larger (originally about $2000 \\times 2000$, resized to $224 \\times 224$), and LeNet cannot classify them effectively. This shows that LeNet has limitations for image classification when the images are relatively large.\n",
    "\n",
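    "The fc1 size input_dim=300000 in the LeNet code above follows from tracing a 224×224 input through the layers (conv3 uses filter_size=4 in this implementation); a quick check:\n",
    "\n",
    "```python\n",
    "def conv_out(size, kernel):\n",
    "    # valid convolution, stride 1\n",
    "    return size - kernel + 1\n",
    "\n",
    "s = 224\n",
    "s = conv_out(s, 5) // 2   # conv1 + pool1: 224 -> 220 -> 110\n",
    "s = conv_out(s, 5) // 2   # conv2 + pool2: 110 -> 106 -> 53\n",
    "s = conv_out(s, 4)        # conv3 (filter_size=4): 53 -> 50\n",
    "flat = 120 * s * s        # 120 channels, flattened\n",
    "print(flat)\n",
    "```\n",
    "\n",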
    "-----"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# AlexNet\n",
    "\n",
    "\n",
    "LeNet's success on handwritten digit recognition did not carry over to other, larger tasks.\n",
    "\n",
    "\n",
    "Compared with LeNet, AlexNet has a deeper network structure, with 5 convolutional layers and 3 fully connected layers, and it improves the training process with the following three techniques:\n",
    "\n",
    "\n",
    "- **Data augmentation**: a common technique in deep learning. Random changes are applied to the training images, such as translation, scaling, cropping, rotation, flipping, or brightness adjustment, producing samples that are similar but not identical to the originals, thereby enlarging the training set. Randomly varying the training samples this way keeps the model from over-relying on particular attributes and helps **suppress overfitting**.\n",
    "  \n",
    "\n",
    "- Dropout to suppress overfitting\n",
    "  \n",
    "\n",
    "- The ReLU activation function to reduce vanishing gradients\n",
    "  \n",
    "\n",
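    "Dropout, mentioned above, randomly zeroes activations during training. A minimal NumPy sketch of the 'inverted dropout' variant (which rescales at training time; the framework's own dropout op may instead scale at inference):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def dropout(x, drop_prob, train=True):\n",
    "    # inverted dropout: zero a fraction of units and rescale the rest\n",
    "    if not train or drop_prob == 0.0:\n",
    "        return x\n",
    "    mask = rng.random(x.shape) >= drop_prob\n",
    "    return x * mask / (1.0 - drop_prob)\n",
    "\n",
    "out = dropout(np.ones(8), 0.5)                # training: values are 0.0 or 2.0\n",
    "same = dropout(np.ones(8), 0.5, train=False)  # evaluation: unchanged\n",
    "```\n",
    "\n",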
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/630059b01a9a4e8c8eded2e7584412daa27bc7c034a8441fabadd713dac29d77\" width = \"1000\"></center>\n",
    "<center><br>Figure 2: AlexNet network structure</br></center>\n",
    "<br></br>\n",
    "\n",
    "\n",
    "------\n",
    "**Note:**\n",
    "\n",
    "The next section introduces the concrete implementation of data augmentation in detail.\n",
    "\n",
    "-------\n",
    "\n",
    "\n",
    "#### The implementation of AlexNet on the eye disease dataset iChallenge-PM is shown below:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -*- coding:utf-8 -*-\n",
    "\n",
    "# Import the required packages\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "import numpy as np\n",
    "from paddle.fluid.dygraph.nn import Conv2D, Pool2D, Linear\n",
    "\n",
    "\n",
    "# Define the AlexNet network structure\n",
    "class AlexNet(fluid.dygraph.Layer):\n",
    "    def __init__(self, num_classes=1):\n",
    "        super(AlexNet, self).__init__()\n",
    "        \n",
    "        # Like LeNet, AlexNet uses convolution and pooling layers to extract image features;\n",
    "        # unlike LeNet, the activation function is changed to 'relu'\n",
    "        self.conv1 = Conv2D(num_channels=3, num_filters=96, filter_size=11, stride=4, padding=5, act='relu')\n",
    "        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')\n",
    "        self.conv2 = Conv2D(num_channels=96, num_filters=256, filter_size=5, stride=1, padding=2, act='relu')\n",
    "        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')\n",
    "        self.conv3 = Conv2D(num_channels=256, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')\n",
    "        self.conv4 = Conv2D(num_channels=384, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')\n",
    "        self.conv5 = Conv2D(num_channels=384, num_filters=256, filter_size=3, stride=1, padding=1, act='relu')\n",
    "        self.pool5 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')\n",
    "\n",
    "        self.fc1 = Linear(input_dim=12544, output_dim=4096, act='relu')\n",
    "        self.drop_ratio1 = 0.5\n",
    "        self.fc2 = Linear(input_dim=4096, output_dim=4096, act='relu')\n",
    "        self.drop_ratio2 = 0.5\n",
    "        self.fc3 = Linear(input_dim=4096, output_dim=num_classes)\n",
    "\n",
    "        \n",
    "    def forward(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.pool1(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.pool2(x)\n",
    "        x = self.conv3(x)\n",
    "        x = self.conv4(x)\n",
    "        x = self.conv5(x)\n",
    "        x = self.pool5(x)\n",
    "        x = fluid.layers.reshape(x, [x.shape[0], -1])\n",
    "        x = self.fc1(x)\n",
    "        # Apply dropout after the fully connected layer to suppress overfitting\n",
    "        x = fluid.layers.dropout(x, self.drop_ratio1)\n",
    "        x = self.fc2(x)\n",
    "        # Apply dropout after the fully connected layer to suppress overfitting\n",
    "        x = fluid.layers.dropout(x, self.drop_ratio2)\n",
    "        x = self.fc3(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with fluid.dygraph.guard():\n",
    "    model = AlexNet()\n",
    "\n",
    "train(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results show that with AlexNet on the eye disease dataset iChallenge-PM, the loss decreases effectively; after 5 epochs of training, accuracy on the validation set reaches about 94%.\n",
    "\n",
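    "The value input_dim=12544 for fc1 in the AlexNet code above can be verified by tracing a 224×224 input through the convolution and pooling layers:\n",
    "\n",
    "```python\n",
    "def conv_out(size, kernel, stride=1, pad=0):\n",
    "    return (size + 2 * pad - kernel) // stride + 1\n",
    "\n",
    "s = 224\n",
    "s = conv_out(s, 11, stride=4, pad=5) // 2  # conv1 + pool1: 224 -> 56 -> 28\n",
    "s = conv_out(s, 5, pad=2) // 2             # conv2 + pool2: 28 -> 28 -> 14\n",
    "s = conv_out(s, 3, pad=1)                  # conv3: 14\n",
    "s = conv_out(s, 3, pad=1)                  # conv4: 14\n",
    "s = conv_out(s, 3, pad=1)                  # conv5: 14\n",
    "s = s // 2                                 # pool5: 7\n",
    "flat = 256 * s * s\n",
    "print(flat)\n",
    "```\n",
    "\n",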
    "----\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# VGG\n",
    "\n",
    "- VGG is one of the most popular CNN models today.\n",
    "\n",
    "- VGG builds a deep convolutional neural network from a series of small 3x3 convolution kernels and pooling layers, and achieves strong results.\n",
    "\n",
    "\n",
    "**Figure 3** shows the network structure of VGG-16:\n",
    "\n",
    "1. It has 13 convolutional layers and 3 fully connected layers.\n",
    "\n",
    "2. The design strictly uses $3\\times 3$ convolution and pooling layers to extract features, with three fully connected layers at the end of the network; the output of the last fully connected layer serves as the classification prediction.\n",
    "\n",
    "3. Every convolution layer in VGG uses ReLU as the activation function, and dropout is added **after the fully connected layers** to suppress overfitting.\n",
    "\n",
    "4. Small convolution kernels effectively reduce the number of parameters, making training and testing more efficient. For example, stacking two $3\\times 3$ convolution layers yields a receptive field of 5.\n",
    "\n",
    "> VGG's success showed that increasing network depth allows a model to learn image feature patterns better.\n",
    "\n",
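    "Point 4 above can be checked with simple arithmetic: two stacked 3×3 layers have the same receptive field as one 5×5 layer but fewer parameters (a sketch ignoring biases; C is an assumed channel count):\n",
    "\n",
    "```python\n",
    "def stacked_receptive_field(kernels):\n",
    "    # receptive field of stacked stride-1 convolutions\n",
    "    rf = 1\n",
    "    for k in kernels:\n",
    "        rf += k - 1\n",
    "    return rf\n",
    "\n",
    "C = 64  # example channel count\n",
    "params_two_3x3 = 2 * 3 * 3 * C * C   # 18 * C * C\n",
    "params_one_5x5 = 5 * 5 * C * C       # 25 * C * C\n",
    "print(stacked_receptive_field([3, 3]), stacked_receptive_field([5]))\n",
    "print(params_two_3x3 < params_one_5x5)\n",
    "```\n",
    "\n",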
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/657026651e084b639e011fe1fd4ba0bed502807ffa764fceb4796b9ee2a8736b\" width = \"1000\"></center>\n",
    "<center><br>Figure 3: VGG network structure</br></center>\n",
    "<br></br>\n",
    "\n",
    "The implementation of VGG on the eye disease dataset iChallenge-PM is shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -*- coding:utf-8 -*-\n",
    "\n",
    "# VGG model code\n",
    "import numpy as np\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "from paddle.fluid.layer_helper import LayerHelper\n",
    "from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, Linear\n",
    "from paddle.fluid.dygraph.base import to_variable\n",
    "\n",
    "# Define a vgg block: several convolution layers followed by one 2x2 max pooling layer\n",
    "class vgg_block(fluid.dygraph.Layer):\n",
    "    def __init__(self, num_convs, in_channels, out_channels):\n",
    "        \"\"\"\n",
    "        num_convs, number of convolution layers in the block\n",
    "        out_channels, output channel count of the convolution layers; within one vgg block all convolution layers share the same output channel count\n",
    "        \"\"\"\n",
    "        super(vgg_block, self).__init__()\n",
    "        self.conv_list = []\n",
    "        for i in range(num_convs):\n",
    "            conv_layer = self.add_sublayer('conv_' + str(i), Conv2D(num_channels=in_channels, \n",
    "                                        num_filters=out_channels, filter_size=3, padding=1, act='relu'))\n",
    "            self.conv_list.append(conv_layer)\n",
    "            in_channels = out_channels\n",
    "        self.pool = Pool2D(pool_stride=2, pool_size = 2, pool_type='max')\n",
    "    def forward(self, x):\n",
    "        for item in self.conv_list:\n",
    "            x = item(x)\n",
    "        return self.pool(x)\n",
    "\n",
    "class VGG(fluid.dygraph.Layer):\n",
    "    def __init__(self, conv_arch=((2, 64), \n",
    "                                (2, 128), (3, 256), (3, 512), (3, 512))):\n",
    "        super(VGG, self).__init__()\n",
    "        self.vgg_blocks=[]\n",
    "        iter_id = 0\n",
    "        # Add the vgg_blocks\n",
    "        # There are 5 vgg_blocks in total; each block's convolution count and output channels come from conv_arch\n",
    "        in_channels = [3, 64, 128, 256, 512, 512]\n",
    "        for (num_convs, num_channels) in conv_arch:\n",
    "            block = self.add_sublayer('block_' + str(iter_id), \n",
    "                    vgg_block(num_convs, in_channels=in_channels[iter_id], \n",
    "                              out_channels=num_channels))\n",
    "            self.vgg_blocks.append(block)\n",
    "            iter_id += 1\n",
    "        self.fc1 = Linear(input_dim=512*7*7, output_dim=4096,\n",
    "                      act='relu')\n",
    "        self.drop1_ratio = 0.5\n",
    "        self.fc2 = Linear(input_dim=4096, output_dim=4096,\n",
    "                      act='relu')\n",
    "        self.drop2_ratio = 0.5\n",
    "        self.fc3 = Linear(input_dim=4096, output_dim=1)\n",
    "        \n",
    "    def forward(self, x):\n",
    "        for item in self.vgg_blocks:\n",
    "            x = item(x)\n",
    "        x = fluid.layers.reshape(x, [x.shape[0], -1])\n",
    "        x = fluid.layers.dropout(self.fc1(x), self.drop1_ratio)\n",
    "        x = fluid.layers.dropout(self.fc2(x), self.drop2_ratio)\n",
    "        x = self.fc3(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with fluid.dygraph.guard():\n",
    "    model = VGG()\n",
    "\n",
    "train(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results show that with VGG on the eye disease dataset iChallenge-PM, the loss decreases effectively; after 5 epochs of training, accuracy on the validation set reaches about 94%."
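,
    "\n",
    "\n",
    "The fc1 size input_dim=512*7*7 in the VGG code above follows from five blocks that each halve a 224×224 input:\n",
    "\n",
    "```python\n",
    "s = 224\n",
    "for _ in range(5):  # five vgg blocks, each ending in 2x2 max pooling\n",
    "    s //= 2\n",
    "flat = 512 * s * s  # the last block outputs 512 channels\n",
    "print(s, flat)\n",
    "```"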
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# GoogLeNet\n",
    "\n",
    "GoogLeNet's main characteristic is that the network has not only depth but also horizontal \"width\".\n",
    "\n",
    "Because image information varies greatly in spatial extent, choosing an appropriate convolution kernel size for feature extraction is difficult:\n",
    "\n",
    "image information spread over a large spatial range is better extracted by larger kernels,\n",
    "\n",
    "while image information confined to a small spatial range is better extracted by smaller kernels.\n",
    "\n",
    "To solve this problem, GoogLeNet proposed the Inception module,\n",
    "\n",
    "shown in **Figure 4**:\n",
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/914011692ba345df82035895748e995aabd2d26b9eea4b88876f1a9bf4e82c6b\" width = \"800\"></center>\n",
    "<center><br>Figure 4: Inception module structure</br></center>\n",
    "<br></br>\n",
    "\n",
    "Figure 4(a) shows the design idea of the Inception module:\n",
    "1. Convolve the input image with 3 kernels of different sizes, and additionally apply max pooling.\n",
    "\n",
    "2. Concatenate the outputs of these 4 operations along the channel dimension; the resulting feature map contains features extracted by kernels of different sizes.\n",
    "\n",
    "3. The Inception module is a multi-path design: each branch uses a different kernel size, and the output channel count is the sum of the branch output channel counts. This makes the output channel count very large, and when several Inception modules are chained, the number of model parameters grows enormously.\n",
    "\n",
    "**To reduce the parameter count**\n",
    "\n",
    "1. the Inception module adopts the design in Figure 4(b), adding a 1x1 convolution layer before each 3x3 and 5x5 convolution layer to control the output channel count;\n",
    "\n",
    "2. a 1x1 convolution layer is added after the max pooling layer to reduce the output channel count.\n",
    "\n",
    "This design idea produces the structure shown in Figure 4(b).\n",
    "\n",
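    "The effect of the 1x1 bottleneck can be seen with a parameter count (illustrative channel numbers, ignoring biases): going from 192 to 32 channels with a 5x5 convolution, directly versus through a 16-channel 1x1 convolution first:\n",
    "\n",
    "```python\n",
    "# direct 5x5 convolution: 192 -> 32 channels\n",
    "direct = 5 * 5 * 192 * 32\n",
    "# bottleneck: 1x1 convolution 192 -> 16, then 5x5 convolution 16 -> 32\n",
    "bottleneck = 1 * 1 * 192 * 16 + 5 * 5 * 16 * 32\n",
    "print(direct, bottleneck)\n",
    "```\n",
    "\n",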
    "The code below implements the Inception block; read it alongside Figure 4(b).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Inception(fluid.dygraph.Layer):\n",
    "    def __init__(self, c0, c1, c2, c3, c4, **kwargs):\n",
    "        '''\n",
    "        Implementation of the Inception module.\n",
    "        \n",
    "        c0, number of input channels, an integer\n",
    "        c1, output channel count of the 1x1 convolution on branch 1 in Figure 4(b), an integer\n",
    "        c2, output channel counts of the convolutions on branch 2, a tuple or list,\n",
    "               where c2[0] is for the 1x1 convolution and c2[1] for the 3x3 convolution\n",
    "        c3, output channel counts of the convolutions on branch 3, a tuple or list,\n",
    "               where c3[0] is for the 1x1 convolution and c3[1] for the 5x5 convolution\n",
    "        c4, output channel count of the 1x1 convolution on branch 4, an integer\n",
    "        '''\n",
    "        super(Inception, self).__init__()\n",
    "        # Create the operations used on each branch of the Inception block\n",
    "        self.p1_1 = Conv2D(num_channels=c0, num_filters=c1, \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p2_1 = Conv2D(num_channels=c0, num_filters=c2[0], \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p2_2 = Conv2D(num_channels=c2[0], num_filters=c2[1], \n",
    "                           filter_size=3, padding=1, act='relu')\n",
    "        self.p3_1 = Conv2D(num_channels=c0, num_filters=c3[0], \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p3_2 = Conv2D(num_channels=c3[0], num_filters=c3[1], \n",
    "                           filter_size=5, padding=2, act='relu')\n",
    "        self.p4_1 = Pool2D(pool_size=3, \n",
    "                           pool_stride=1,  pool_padding=1, \n",
    "                           pool_type='max')\n",
    "        self.p4_2 = Conv2D(num_channels=c0, num_filters=c4, \n",
    "                           filter_size=1, act='relu')\n",
    "\n",
    "    def forward(self, x):\n",
    "        # 支路1只包含一个1x1卷积\n",
    "        p1 = self.p1_1(x)\n",
    "        # 支路2包含 1x1卷积 + 3x3卷积\n",
    "        p2 = self.p2_2(self.p2_1(x))\n",
    "        # 支路3包含 1x1卷积 + 5x5卷积\n",
    "        p3 = self.p3_2(self.p3_1(x))\n",
    "        # 支路4包含 最大池化和1x1卷积\n",
    "        p4 = self.p4_2(self.p4_1(x))\n",
    "        # 将每个支路的输出特征图拼接在一起作为最终的输出结果\n",
    "        return fluid.layers.concat([p1, p2, p3, p4], axis=1)  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "GoogLeNet的架构如 **图5** 所示，在主体卷积部分中使用5个模块（block），每个模块之间使用步幅为2的3 ×3最大池化层来减小输出高宽。\n",
    "* 第一模块使用一个64通道的7 × 7卷积层。\n",
    "* 第二模块使用2个卷积层:首先是64通道的1 × 1卷积层，然后是将通道增大3倍的3 × 3卷积层。\n",
    "* 第三模块串联2个完整的Inception块。\n",
    "* 第四模块串联了5个Inception块。\n",
    "* 第五模块串联了2 个Inception块。\n",
    "* 第五模块的后面紧跟输出层，使用全局平均池化层来将每个通道的高和宽变成1，最后接上一个输出个数为标签类别数的全连接层。\n",
    "\n",
    "-----\n",
    "说明：\n",
    "在原作者的论文中添加了图中所示的softmax1和softmax2两个辅助分类器，如下图所示，训练时将三个分类器的损失函数进行加权求和，以缓解梯度消失现象。这里的程序作了简化，没有加入辅助分类器。\n",
    "\n",
    "-----\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/9d0794b330934bc9be72cba9f056d62eb77d3ba6c2ac450fae64cf86d86f2e04\" width = \"600\"></center>\n",
    "<center><br>图5：GoogLeNet模型网络结构示意图</br></center>\n",
    "<br></br>\n",
    "\n",
    "GoogLeNet的具体实现如下代码所示："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -*- coding:utf-8 -*-\n",
    "\n",
    "# GoogLeNet模型代码\n",
    "import numpy as np\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "from paddle.fluid.layer_helper import LayerHelper\n",
    "from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, Linear\n",
    "from paddle.fluid.dygraph.base import to_variable\n",
    "\n",
    "# 定义Inception块\n",
    "class Inception(fluid.dygraph.Layer):\n",
    "    def __init__(self, c0,c1, c2, c3, c4, **kwargs):\n",
    "        '''\n",
    "        Inception模块的实现代码，\n",
    "        \n",
    "        c1,  图(b)中第一条支路1x1卷积的输出通道数，数据类型是整数\n",
    "        c2，图(b)中第二条支路卷积的输出通道数，数据类型是tuple或list, \n",
    "               其中c2[0]是1x1卷积的输出通道数，c2[1]是3x3\n",
    "        c3，图(b)中第三条支路卷积的输出通道数，数据类型是tuple或list, \n",
    "               其中c3[0]是1x1卷积的输出通道数，c3[1]是3x3\n",
    "        c4,  图(b)中第一条支路1x1卷积的输出通道数，数据类型是整数\n",
    "        '''\n",
    "        super(Inception, self).__init__()\n",
    "        # 依次创建Inception块每条支路上使用到的操作\n",
    "        self.p1_1 = Conv2D(num_channels=c0, num_filters=c1, \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p2_1 = Conv2D(num_channels=c0, num_filters=c2[0], \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p2_2 = Conv2D(num_channels=c2[0], num_filters=c2[1], \n",
    "                           filter_size=3, padding=1, act='relu')\n",
    "        self.p3_1 = Conv2D(num_channels=c0, num_filters=c3[0], \n",
    "                           filter_size=1, act='relu')\n",
    "        self.p3_2 = Conv2D(num_channels=c3[0], num_filters=c3[1], \n",
    "                           filter_size=5, padding=2, act='relu')\n",
    "        self.p4_1 = Pool2D(pool_size=3, \n",
    "                           pool_stride=1,  pool_padding=1, \n",
    "                           pool_type='max')\n",
    "        self.p4_2 = Conv2D(num_channels=c0, num_filters=c4, \n",
    "                           filter_size=1, act='relu')\n",
    "\n",
    "    def forward(self, x):\n",
    "        # 支路1只包含一个1x1卷积\n",
    "        p1 = self.p1_1(x)\n",
    "        # 支路2包含 1x1卷积 + 3x3卷积\n",
    "        p2 = self.p2_2(self.p2_1(x))\n",
    "        # 支路3包含 1x1卷积 + 5x5卷积\n",
    "        p3 = self.p3_2(self.p3_1(x))\n",
    "        # 支路4包含 最大池化和1x1卷积\n",
    "        p4 = self.p4_2(self.p4_1(x))\n",
    "        # 将每个支路的输出特征图拼接在一起作为最终的输出结果\n",
    "        return fluid.layers.concat([p1, p2, p3, p4], axis=1)  \n",
    "    \n",
    "class GoogLeNet(fluid.dygraph.Layer):\n",
    "    def __init__(self):\n",
    "        super(GoogLeNet, self).__init__()\n",
    "        # GoogLeNet包含五个模块，每个模块后面紧跟一个池化层\n",
    "        # 第一个模块包含1个卷积层\n",
    "        self.conv1 = Conv2D(num_channels=3, num_filters=64, filter_size=7, \n",
    "                            padding=3, act='relu')\n",
    "        # 3x3最大池化\n",
    "        self.pool1 = Pool2D(pool_size=3, pool_stride=2,  \n",
    "                            pool_padding=1, pool_type='max')\n",
    "        # 第二个模块包含2个卷积层\n",
    "        self.conv2_1 = Conv2D(num_channels=64, num_filters=64, \n",
    "                              filter_size=1, act='relu')\n",
    "        self.conv2_2 = Conv2D(num_channels=64, num_filters=192, \n",
    "                              filter_size=3, padding=1, act='relu')\n",
    "        # 3x3最大池化\n",
    "        self.pool2 = Pool2D(pool_size=3, pool_stride=2,  \n",
    "                            pool_padding=1, pool_type='max')\n",
    "        # 第三个模块包含2个Inception块\n",
    "        self.block3_1 = Inception(192, 64, (96, 128), (16, 32), 32)\n",
    "        self.block3_2 = Inception(256, 128, (128, 192), (32, 96), 64)\n",
    "        # 3x3最大池化\n",
    "        self.pool3 = Pool2D(pool_size=3, pool_stride=2,  \n",
    "                               pool_padding=1, pool_type='max')\n",
    "        # 第四个模块包含5个Inception块\n",
    "        self.block4_1 = Inception(480, 192, (96, 208), (16, 48), 64)\n",
    "        self.block4_2 = Inception(512, 160, (112, 224), (24, 64), 64)\n",
    "        self.block4_3 = Inception(512, 128, (128, 256), (24, 64), 64)\n",
    "        self.block4_4 = Inception(512, 112, (144, 288), (32, 64), 64)\n",
    "        self.block4_5 = Inception(528, 256, (160, 320), (32, 128), 128)\n",
    "        # 3x3最大池化\n",
    "        self.pool4 = Pool2D(pool_size=3, pool_stride=2,  \n",
    "                               pool_padding=1, pool_type='max')\n",
    "        # 第五个模块包含2个Inception块\n",
    "        self.block5_1 = Inception(832, 256, (160, 320), (32, 128), 128)\n",
    "        self.block5_2 = Inception(832, 384, (192, 384), (48, 128), 128)\n",
    "        # 全局池化，尺寸用的是global_pooling，pool_stride不起作用\n",
    "        self.pool5 = Pool2D(pool_stride=1, \n",
    "                               global_pooling=True, pool_type='avg')\n",
    "        self.fc = Linear(input_dim=1024, output_dim=1, act=None)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.pool1(self.conv1(x))\n",
    "        x = self.pool2(self.conv2_2(self.conv2_1(x)))\n",
    "        x = self.pool3(self.block3_2(self.block3_1(x)))\n",
    "        x = self.block4_3(self.block4_2(self.block4_1(x)))\n",
    "        x = self.pool4(self.block4_5(self.block4_4(x)))\n",
    "        x = self.pool5(self.block5_2(self.block5_1(x)))\n",
    "        x = fluid.layers.reshape(x, [x.shape[0], -1])\n",
    "        x = self.fc(x)\n",
    "        return x\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with fluid.dygraph.guard():\n",
    "    model = GoogLeNet()\n",
    "\n",
    "train(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通过运行结果可以发现，使用GoogLeNet在眼疾筛查数据集iChallenge-PM上，loss能有效的下降，经过5个epoch的训练，在验证集上的准确率可以达到**95%**左右。\n",
    "\n",
    "-----"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ResNet\n",
    "\n",
    "ResNet将识别错误率降低到了3.6%，这个结果甚至超出了正常人眼识别的精度。\n",
    "\n",
    "\n",
    "> 实践表明，增加网络的层数之后，训练误差往往不降反升。Kaiming He等人提出了残差网络ResNet来解决问题，\n",
    "\n",
    "其基本思想如 **图6**所示。\n",
    "* 图6(a)：表示增加网络的时候，将x映射成$y=F(x)$输出。\n",
    "\n",
    "* 图6(b)：对图6(a)作了改进，输出$y=F(x) + x$。这时不是直接学习输出特征y的表示，而是学习$y-x$。\n",
    "\n",
    "  - 如果想学习出原模型的表示，只需将F(x)的参数全部设置为0，则$y=x$是恒等映射。\n",
    "  \n",
    "  - $F(x) = y - x$也叫做残差项，如果$x\\rightarrow y$的映射接近恒等映射，图6(b)中通过学习残差项也比图6(a)学习完整映射形式更加容易。\n",
    "\n",
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/d2e891d19d39480fa9777e264a98dbb5bfe41964d004422da299f27c37c211fc\" width = \"500\"></center>\n",
    "<center><br>图6：残差块设计思想</br></center>\n",
    "<br></br>\n"
   ]
  },
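  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The idea in Figure 6(b) can be sketched with plain NumPy (a toy example, independent of the Paddle code below): when all parameters of F are zero, the block outputs exactly its input, so a deeper network can always fall back to the identity mapping.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def residual_block(x, w):\n",
    "    # F(x) is a single linear layer here for illustration; the block outputs y = F(x) + x\n",
    "    return x @ w + x\n",
    "\n",
    "x = np.array([[1.0, 2.0, 3.0]])\n",
    "w_zero = np.zeros((3, 3))\n",
    "\n",
    "# With all parameters of F set to zero, the block is an identity mapping\n",
    "print(residual_block(x, w_zero))  # [[1. 2. 3.]]\n",
    "```"
   ]
  },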
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "图6(b)的结构是残差网络的基础，这种结构也叫做残差块（residual block）。输入x通过跨层连接，能更快的向前传播数据，或者向后传播梯度。残差块的具体设计方案如 **图**7 所示，这种设计方案也成称作瓶颈结构（BottleNeck）。\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/322b26358d43401ba81546dd134a310cfb11ecafb3314aab88b5885ff642870b\" width = \"500\"></center>\n",
    "<center><br>图7：残差块结构示意图</br></center>\n",
    "<br></br>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下图表示出了ResNet-50的结构，一共包含49层卷积和1层全连接，所以被称为ResNet-50。\n",
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/b31389ddfdc84276873c2fc3ee5ae149e96cd1f0edf84466a35661959bbcb3dd\" width = \"1000\"></center>\n",
    "<center><br>图8：ResNet-50模型网络结构示意图</br></center>\n",
    "<br></br>\n",
    "\n",
    "ResNet-50的具体实现如下代码所示："
   ]
  },
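  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The count of 49 convolutional layers can be checked from the structure: 1 convolution in the first module plus 3 convolutions in each of the 3+4+6+3=16 bottleneck blocks (the 1x1 shortcut convolutions are conventionally not counted):\n",
    "\n",
    "```python\n",
    "depth = [3, 4, 6, 3]   # bottleneck blocks in modules 2 to 5 of ResNet-50\n",
    "convs_per_block = 3    # each bottleneck block has a 1x1, a 3x3 and a 1x1 convolution\n",
    "stem_convs = 1         # the initial 7x7 convolution\n",
    "\n",
    "total = stem_convs + convs_per_block * sum(depth)\n",
    "print(total)  # 49; with the final fully connected layer this gives the name ResNet-50\n",
    "```"
   ]
  },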
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# -*- coding:utf-8 -*-\n",
    "\n",
    "# ResNet模型代码\n",
    "import numpy as np\n",
    "import paddle\n",
    "import paddle.fluid as fluid\n",
    "from paddle.fluid.layer_helper import LayerHelper\n",
    "from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, Linear\n",
    "from paddle.fluid.dygraph.base import to_variable\n",
    "\n",
    "# ResNet中使用了BatchNorm层，在卷积层的后面加上BatchNorm以提升数值稳定性\n",
    "# 定义卷积批归一化块\n",
    "class ConvBNLayer(fluid.dygraph.Layer):\n",
    "    def __init__(self,\n",
    "                 num_channels,\n",
    "                 num_filters,\n",
    "                 filter_size,\n",
    "                 stride=1,\n",
    "                 groups=1,\n",
    "                 act=None):\n",
    "        \"\"\"\n",
    "        \n",
    "        num_channels, 卷积层的输入通道数\n",
    "        num_filters, 卷积层的输出通道数\n",
    "        stride, 卷积层的步幅\n",
    "        groups, 分组卷积的组数，默认groups=1不使用分组卷积\n",
    "        act, 激活函数类型，默认act=None不使用激活函数\n",
    "        \"\"\"\n",
    "        super(ConvBNLayer, self).__init__()\n",
    "\n",
    "        # 创建卷积层\n",
    "        self._conv = Conv2D(\n",
    "            num_channels=num_channels,\n",
    "            num_filters=num_filters,\n",
    "            filter_size=filter_size,\n",
    "            stride=stride,\n",
    "            padding=(filter_size - 1) // 2,\n",
    "            groups=groups,\n",
    "            act=None,\n",
    "            bias_attr=False)\n",
    "\n",
    "        # 创建BatchNorm层\n",
    "        self._batch_norm = BatchNorm(num_filters, act=act)\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        y = self._conv(inputs)\n",
    "        y = self._batch_norm(y)\n",
    "        return y\n",
    "\n",
    "# 定义残差块\n",
    "# 每个残差块会对输入图片做三次卷积，然后跟输入图片进行短接\n",
    "# 如果残差块中第三次卷积输出特征图的形状与输入不一致，则对输入图片做1x1卷积，将其输出形状调整成一致\n",
    "class BottleneckBlock(fluid.dygraph.Layer):\n",
    "    def __init__(self,\n",
    "                 num_channels,\n",
    "                 num_filters,\n",
    "                 stride,\n",
    "                 shortcut=True):\n",
    "        super(BottleneckBlock, self).__init__()\n",
    "        # 创建第一个卷积层 1x1\n",
    "        self.conv0 = ConvBNLayer(\n",
    "            num_channels=num_channels,\n",
    "            num_filters=num_filters,\n",
    "            filter_size=1,\n",
    "            act='relu')\n",
    "        # 创建第二个卷积层 3x3\n",
    "        self.conv1 = ConvBNLayer(\n",
    "            num_channels=num_filters,\n",
    "            num_filters=num_filters,\n",
    "            filter_size=3,\n",
    "            stride=stride,\n",
    "            act='relu')\n",
    "        # 创建第三个卷积 1x1，但输出通道数乘以4\n",
    "        self.conv2 = ConvBNLayer(\n",
    "            num_channels=num_filters,\n",
    "            num_filters=num_filters * 4,\n",
    "            filter_size=1,\n",
    "            act=None)\n",
    "\n",
    "        # 如果conv2的输出跟此残差块的输入数据形状一致，则shortcut=True\n",
    "        # 否则shortcut = False，添加1个1x1的卷积作用在输入数据上，使其形状变成跟conv2一致\n",
    "        if not shortcut:\n",
    "            self.short = ConvBNLayer(\n",
    "                num_channels=num_channels,\n",
    "                num_filters=num_filters * 4,\n",
    "                filter_size=1,\n",
    "                stride=stride)\n",
    "\n",
    "        self.shortcut = shortcut\n",
    "\n",
    "        self._num_channels_out = num_filters * 4\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        y = self.conv0(inputs)\n",
    "        conv1 = self.conv1(y)\n",
    "        conv2 = self.conv2(conv1)\n",
    "\n",
    "        # 如果shortcut=True，直接将inputs跟conv2的输出相加\n",
    "        # 否则需要对inputs进行一次卷积，将形状调整成跟conv2输出一致\n",
    "        if self.shortcut:\n",
    "            short = inputs\n",
    "        else:\n",
    "            short = self.short(inputs)\n",
    "\n",
    "        y = fluid.layers.elementwise_add(x=short, y=conv2)\n",
    "        layer_helper = LayerHelper(self.full_name(), act='relu')\n",
    "        return layer_helper.append_activation(y)\n",
    "\n",
    "# 定义ResNet模型\n",
    "class ResNet(fluid.dygraph.Layer):\n",
    "    def __init__(self, layers=50, class_dim=1):\n",
    "        \"\"\"\n",
    "        \n",
    "        layers, 网络层数，可以是50, 101或者152\n",
    "        class_dim，分类标签的类别数\n",
    "        \"\"\"\n",
    "        super(ResNet, self).__init__()\n",
    "        self.layers = layers\n",
    "        supported_layers = [50, 101, 152]\n",
    "        assert layers in supported_layers, \\\n",
    "            \"supported layers are {} but input layer is {}\".format(supported_layers, layers)\n",
    "\n",
    "        if layers == 50:\n",
    "            #ResNet50包含多个模块，其中第2到第5个模块分别包含3、4、6、3个残差块\n",
    "            depth = [3, 4, 6, 3]\n",
    "        elif layers == 101:\n",
    "            #ResNet101包含多个模块，其中第2到第5个模块分别包含3、4、23、3个残差块\n",
    "            depth = [3, 4, 23, 3]\n",
    "        elif layers == 152:\n",
    "            #ResNet50包含多个模块，其中第2到第5个模块分别包含3、8、36、3个残差块\n",
    "            depth = [3, 8, 36, 3]\n",
    "        \n",
    "        # 残差块中使用到的卷积的输出通道数\n",
    "        num_filters = [64, 128, 256, 512]\n",
    "\n",
    "        # ResNet的第一个模块，包含1个7x7卷积，后面跟着1个最大池化层\n",
    "        self.conv = ConvBNLayer(\n",
    "            num_channels=3,\n",
    "            num_filters=64,\n",
    "            filter_size=7,\n",
    "            stride=2,\n",
    "            act='relu')\n",
    "        self.pool2d_max = Pool2D(\n",
    "            pool_size=3,\n",
    "            pool_stride=2,\n",
    "            pool_padding=1,\n",
    "            pool_type='max')\n",
    "\n",
    "        # ResNet的第二到第五个模块c2、c3、c4、c5\n",
    "        self.bottleneck_block_list = []\n",
    "        num_channels = 64\n",
    "        for block in range(len(depth)):\n",
    "            shortcut = False\n",
    "            for i in range(depth[block]):\n",
    "                bottleneck_block = self.add_sublayer(\n",
    "                    'bb_%d_%d' % (block, i),\n",
    "                    BottleneckBlock(\n",
    "                        num_channels=num_channels,\n",
    "                        num_filters=num_filters[block],\n",
    "                        stride=2 if i == 0 and block != 0 else 1, # c3、c4、c5将会在第一个残差块使用stride=2；其余所有残差块stride=1\n",
    "                        shortcut=shortcut))\n",
    "                num_channels = bottleneck_block._num_channels_out\n",
    "                self.bottleneck_block_list.append(bottleneck_block)\n",
    "                shortcut = True\n",
    "\n",
    "        # 在c5的输出特征图上使用全局池化\n",
    "        self.pool2d_avg = Pool2D(pool_size=7, pool_type='avg', global_pooling=True)\n",
    "\n",
    "        # stdv用来作为全连接层随机初始化参数的方差\n",
    "        import math\n",
    "        stdv = 1.0 / math.sqrt(2048 * 1.0)\n",
    "        \n",
    "        # 创建全连接层，输出大小为类别数目\n",
    "        self.out = Linear(input_dim=2048, output_dim=class_dim,\n",
    "                      param_attr=fluid.param_attr.ParamAttr(\n",
    "                          initializer=fluid.initializer.Uniform(-stdv, stdv)))\n",
    "\n",
    "        \n",
    "    def forward(self, inputs):\n",
    "        y = self.conv(inputs)\n",
    "        y = self.pool2d_max(y)\n",
    "        for bottleneck_block in self.bottleneck_block_list:\n",
    "            y = bottleneck_block(y)\n",
    "        y = self.pool2d_avg(y)\n",
    "        y = fluid.layers.reshape(y, [y.shape[0], -1])\n",
    "        y = self.out(y)\n",
    "        return y\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with fluid.dygraph.guard():\n",
    "    model = ResNet()\n",
    "\n",
    "train(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通过运行结果可以发现，使用ResNet在眼疾筛查数据集iChallenge-PM上，loss能有效的下降，经过5个epoch的训练，在验证集上的准确率可以达到95%左右。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
