{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0db6d855",
   "metadata": {},
   "source": [
    "# Section 3: Building a Network\n",
    "`mindspore.nn` provides the basic building blocks for network models. The `Cell` class in `nn` is the base class of all networks and the basic unit from which they are built. To build a neural network, subclass `Cell` and override its `__init__` and `construct` methods. We take LeNet-5 as an example:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8832176",
   "metadata": {},
   "source": [
    "## 1. Example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ee6d724",
   "metadata": {},
   "source": [
    "Define each network layer in the `__init__` method (see the official documentation for the available layers and their parameters), and build the forward pass in the `construct` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "912ce353",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.nn as nn\n",
    "\n",
    "class LeNet5(nn.Cell):\n",
    "    \"\"\"\n",
    "    LeNet-5 network architecture\n",
    "    \"\"\"\n",
    "    def __init__(self, num_class=10, num_channel=1):\n",
    "        super(LeNet5, self).__init__()\n",
    "        # Convolutional layer: num_channel input channels, 6 output channels, 5x5 kernel\n",
    "        self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')\n",
    "        # Convolutional layer: 6 input channels, 16 output channels, 5x5 kernel\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')\n",
    "        # Fully connected layer: 16*5*5 inputs, 120 outputs\n",
    "        self.fc1 = nn.Dense(16 * 5 * 5, 120)\n",
    "        # Fully connected layer: 120 inputs, 84 outputs\n",
    "        self.fc2 = nn.Dense(120, 84)\n",
    "        # Fully connected layer: 84 inputs, num_class outputs\n",
    "        self.fc3 = nn.Dense(84, num_class)\n",
    "        # ReLU activation\n",
    "        self.relu = nn.ReLU()\n",
    "        # Max-pooling layer\n",
    "        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "        # Flatten multi-dimensional input into one dimension\n",
    "        self.flatten = nn.Flatten()\n",
    "\n",
    "    def construct(self, x):\n",
    "        # Build the forward pass from the layers defined above\n",
    "        x = self.conv1(x)\n",
    "        x = self.relu(x)\n",
    "        x = self.max_pool2d(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.relu(x)\n",
    "        x = self.max_pool2d(x)\n",
    "        x = self.flatten(x)\n",
    "        x = self.fc1(x)\n",
    "        x = self.relu(x)\n",
    "        x = self.fc2(x)\n",
    "        x = self.relu(x)\n",
    "        x = self.fc3(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "9046aec5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LeNet5<\n",
      "  (conv1): Conv2d<input_channels=1, output_channels=6, kernel_size=(5, 5), stride=(1, 1), pad_mode=valid, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>\n",
      "  (conv2): Conv2d<input_channels=6, output_channels=16, kernel_size=(5, 5), stride=(1, 1), pad_mode=valid, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>\n",
      "  (fc1): Dense<input_channels=400, output_channels=120, has_bias=True>\n",
      "  (fc2): Dense<input_channels=120, output_channels=84, has_bias=True>\n",
      "  (fc3): Dense<input_channels=84, output_channels=10, has_bias=True>\n",
      "  (relu): ReLU<>\n",
      "  (max_pool2d): MaxPool2d<kernel_size=2, stride=2, pad_mode=VALID>\n",
      "  (flatten): Flatten<>\n",
      "  >\n"
     ]
    }
   ],
   "source": [
    "# Inspect the model structure\n",
    "model = LeNet5()\n",
    "print(model)"
   ]
  },
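  {
   "cell_type": "markdown",
   "id": "3fa91c20",
   "metadata": {},
   "source": [
    "A quick shape check helps confirm the layer sizes above. As a sketch (assuming a 1x1x32x32 input, the size LeNet-5 expects with pad_mode='valid'), we can feed random data through the network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b8d2e31",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import mindspore as ms\n",
    "\n",
    "# A random batch of one 32x32 single-channel image\n",
    "dummy = ms.Tensor(np.random.rand(1, 1, 32, 32), ms.float32)\n",
    "out = model(dummy)\n",
    "# The output should be one row of num_class=10 logits\n",
    "print(out.shape)"
   ]
  },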
  {
   "cell_type": "markdown",
   "id": "e586cd7b",
   "metadata": {},
   "source": [
    "## 2. Defining Your Own Network"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "e6790e14",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.nn as nn\n",
    "\n",
    "class Mymodel(nn.Cell):\n",
    "    \"\"\"A simple custom network\"\"\"\n",
    "    def __init__(self):\n",
    "        # Call the parent class constructor\n",
    "        super(Mymodel, self).__init__()\n",
    "        # Define the network layers\n",
    "        self.layer1 = nn.ReLU()\n",
    "        self.layer2 = nn.Sigmoid()\n",
    "    \n",
    "    def construct(self, x):\n",
    "        x = self.layer1(x)\n",
    "        x = self.layer2(x)\n",
    "        return x"
   ]
  },
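  {
   "cell_type": "markdown",
   "id": "5c9e3f42",
   "metadata": {},
   "source": [
    "For a simple stack of layers like this, a custom class is not strictly necessary. As a sketch (assuming the standard nn.SequentialCell container from mindspore.nn), the same two activations can be chained directly, with no construct method to write:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6da04053",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.nn as nn\n",
    "\n",
    "# SequentialCell runs its sub-cells in order during the forward pass\n",
    "seq_model = nn.SequentialCell(nn.ReLU(), nn.Sigmoid())\n",
    "print(seq_model)"
   ]
  },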
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "1645abf9",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mymodel<\n",
      "  (layer1): ReLU<>\n",
      "  (layer2): Sigmoid<>\n",
      "  >\n"
     ]
    }
   ],
   "source": [
    "model = Mymodel()\n",
    "print(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ec10759",
   "metadata": {},
   "source": [
    "## 3. Trying Out the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "9ab81d09",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0.5        0.5        0.95257413 0.9820138 ]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import mindspore as ms\n",
    "arr = np.array([[-1, -2, 3, 4]])\n",
    "tensor = ms.Tensor(arr, ms.float32)\n",
    "\n",
    "# Call the model instance directly; this invokes construct under the hood\n",
    "output = model(tensor)\n",
    "print(output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "caefcdf6",
   "metadata": {},
   "source": [
    "Some networks continually optimize their parameters while running. During training, you can inspect these parameters with\n",
    "get_parameters(), which returns an iterator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "c9bdca43",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "layer:conv1.weight, shape:(6, 1, 5, 5), dtype:Float32, requires_grad:True\n",
      "layer:conv2.weight, shape:(16, 6, 5, 5), dtype:Float32, requires_grad:True\n",
      "layer:fc1.weight, shape:(120, 400), dtype:Float32, requires_grad:True\n",
      "layer:fc1.bias, shape:(120,), dtype:Float32, requires_grad:True\n",
      "layer:fc2.weight, shape:(84, 120), dtype:Float32, requires_grad:True\n",
      "layer:fc2.bias, shape:(84,), dtype:Float32, requires_grad:True\n",
      "layer:fc3.weight, shape:(10, 84), dtype:Float32, requires_grad:True\n",
      "layer:fc3.bias, shape:(10,), dtype:Float32, requires_grad:True\n"
     ]
    }
   ],
   "source": [
    "model = LeNet5()\n",
    "for item in model.get_parameters():\n",
    "    print(f\"layer:{item.name}, shape:{item.shape}, dtype:{item.dtype}, requires_grad:{item.requires_grad}\")"
   ]
  },
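  {
   "cell_type": "markdown",
   "id": "7eb15164",
   "metadata": {},
   "source": [
    "A related interface is trainable_params() (assuming the standard Cell.trainable_params method), which returns only the parameters with requires_grad=True as a list; this is what an optimizer is typically given:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8fc26275",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = LeNet5()\n",
    "# trainable_params() returns a list, so the parameter tensors can be counted directly\n",
    "params = model.trainable_params()\n",
    "print(len(params))"
   ]
  },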
  {
   "cell_type": "markdown",
   "id": "f2d2deba",
   "metadata": {},
   "source": [
    "## 4. Quick Construction"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "357b219a",
   "metadata": {},
   "source": [
    "For classic network architectures there is no need to write the definition from scratch; you can call prebuilt model interfaces directly. Image-related models are available in mindvision.classification.models (see the official documentation)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "99109954",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "layer:conv1.weight, shape:(6, 1, 5, 5), dtype:Float32, requires_grad:True\n",
      "layer:conv2.weight, shape:(16, 6, 5, 5), dtype:Float32, requires_grad:True\n",
      "layer:fc1.weight, shape:(120, 400), dtype:Float32, requires_grad:True\n",
      "layer:fc1.bias, shape:(120,), dtype:Float32, requires_grad:True\n",
      "layer:fc2.weight, shape:(84, 120), dtype:Float32, requires_grad:True\n",
      "layer:fc2.bias, shape:(84,), dtype:Float32, requires_grad:True\n",
      "layer:fc3.weight, shape:(10, 84), dtype:Float32, requires_grad:True\n",
      "layer:fc3.bias, shape:(10,), dtype:Float32, requires_grad:True\n"
     ]
    }
   ],
   "source": [
    "from mindvision.classification.models import LeNet5\n",
    "\n",
    "model = LeNet5()\n",
    "for m in model.get_parameters():\n",
    "    print(f\"layer:{m.name}, shape:{m.shape}, dtype:{m.dtype}, requires_grad:{m.requires_grad}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "mindspore",
   "language": "python",
   "name": "mindvision"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
