{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Describing a convolutional layer in TF\n",
    "    tf.keras.layers.Conv2D(\n",
    "    filters = number of kernels,\n",
    "    kernel_size = kernel size, # an integer for a square kernel, or a tuple (height h, width w)\n",
    "    strides = stride, # an integer if the same in both directions, or a tuple (vertical h, horizontal w); default 1\n",
    "    padding = \"same\" or \"valid\", # \"same\" pads with zeros, \"valid\" (the default) does not\n",
    "    activation = \"relu\", \"sigmoid\", \"tanh\", \"softmax\", etc. # omit if a BN (batch normalization) layer follows\n",
    "    input_shape = (height, width, channels) # dimensions of the input feature map; may be omitted\n",
    "    )\n",
    "\n",
    "    e.g.:\n",
    "    model = tf.keras.models.Sequential([\n",
    "    Conv2D(6, 5, padding='valid', activation='sigmoid'), # layer 1: 6 kernels, kernel size 5, stride 1, no zero padding, sigmoid activation\n",
    "    MaxPool2D(2, 2),\n",
    "    Conv2D(6, (5, 5), padding='valid', activation='sigmoid'), # layer 2\n",
    "    MaxPool2D(2, (2, 2)),\n",
    "    Conv2D(filters=6, kernel_size=(5, 5), padding='valid', activation='sigmoid'), # layer 3\n",
    "    MaxPool2D(pool_size=(2, 2), strides=2),\n",
    "    Flatten(),\n",
    "    Dense(10, activation='softmax')\n",
    "])"
   ]
  },
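  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The kernel-size/stride/padding rules above can be checked with a small NumPy sketch (illustrative only, not a TF API; `conv2d_single` is a hypothetical helper). It cross-correlates one single-channel map with one kernel:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def conv2d_single(x, k, stride=1, padding='valid'):\n",
    "    # Cross-correlate one single-channel map with one kernel, mirroring\n",
    "    # Conv2D's 'valid'/'same' padding for an odd kernel and stride 1.\n",
    "    kh, kw = k.shape\n",
    "    if padding == 'same':\n",
    "        x = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))\n",
    "    oh = (x.shape[0] - kh) // stride + 1\n",
    "    ow = (x.shape[1] - kw) // stride + 1\n",
    "    out = np.empty((oh, ow))\n",
    "    for i in range(oh):\n",
    "        for j in range(ow):\n",
    "            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)\n",
    "    return out\n",
    "\n",
    "x = np.arange(32.0 * 32.0).reshape(32, 32)\n",
    "k = np.ones((5, 5)) / 25.0                        # a 5x5 averaging kernel\n",
    "print(conv2d_single(x, k).shape)                  # 'valid': (28, 28)\n",
    "print(conv2d_single(x, k, padding='same').shape)  # 'same': (32, 32)\n",
    "```"
   ]
  },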
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Batch Normalization (BN)\n",
    "    Standardization: rescale data to zero mean and unit standard deviation.\n",
    "    Batch normalization: standardize one small batch of data at a time, pulling it back toward a standard normal distribution; it is usually applied between the convolution and the activation.\n",
    "    \n",
    "$$ H_i^{\\prime k}=\\frac{H_i^k-\\mu _{batch}^k}{\\sigma _{batch}^k} $$\n",
    "    $H_i^k$: pixel $i$ of the output feature map of the $k$-th kernel, before BN\n",
    "    $\\mu _{batch}^k$: mean of all pixels in the $k$-th kernel's output feature maps over the batch, before BN\n",
    "    $\\sigma _{batch}^k$: standard deviation of all pixels in the $k$-th kernel's output feature maps over the batch, before BN\n",
    "\n",
    "    What BN does:\n",
    "    It pulls shifted data back to zero mean,\n",
    "    so the inputs to the activation function fall in its near-linear region;\n",
    "    small changes in the input are then reflected more clearly in the activation's output,\n",
    "    which improves the activation function's ability to discriminate between inputs.\n",
    "    \n",
    "    However, this plain standardization forces the features to follow a standard normal distribution,\n",
    "    concentrated in the linear region around the activation's center, so the activation loses its nonlinearity.\n",
    "    BN therefore introduces two trainable parameters for each kernel:\n",
    "    a scale factor γ and an offset factor β.\n",
    "$$ X_i^k = \\gamma _k H_i^{\\prime k} + \\beta _k$$\n",
    "    During backpropagation, γ and β are optimized together with the other trainable parameters;\n",
    "    they tune the spread and offset of the standardized feature distribution,\n",
    "    preserving the network's nonlinear expressive power.\n",
    "    \n",
    "    Structure:\n",
    "    Convolution\n",
    "    Batch normalization (BN)\n",
    "    Activation\n",
    "    \n",
    "    code:\n",
    "    BN in TF: tf.keras.layers.BatchNormalization()\n",
    "    e.g.:\n",
    "    model = tf.keras.models.Sequential([\n",
    "    Conv2D(filters=6, kernel_size=(5, 5), padding='same'), # convolutional layer\n",
    "    BatchNormalization(), # BN layer\n",
    "    Activation('relu'), # activation layer\n",
    "    MaxPool2D(pool_size=(2, 2), strides=2, padding='same'), # pooling layer\n",
    "    Dropout(0.2), # dropout layer\n",
    "    ])\n",
    "    "
   ]
  },
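  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two formulas above can be traced directly in NumPy (an illustrative sketch; `batch_norm` is a hypothetical helper, not the TF layer, and `eps` is a small constant added for numerical stability, as real BN implementations do):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def batch_norm(h, gamma, beta, eps=1e-5):\n",
    "    # h: (batch, height, width, kernels); gamma, beta: one value per kernel.\n",
    "    mu = h.mean(axis=(0, 1, 2))        # per-kernel mean over the batch\n",
    "    sigma = h.std(axis=(0, 1, 2))      # per-kernel std over the batch\n",
    "    h_norm = (h - mu) / (sigma + eps)  # standardize: zero mean, unit std\n",
    "    return gamma * h_norm + beta       # rescale and shift with gamma, beta\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "h = rng.normal(loc=3.0, scale=2.0, size=(8, 4, 4, 6))  # shifted activations\n",
    "x = batch_norm(h, gamma=np.ones(6), beta=np.zeros(6))\n",
    "print(round(x.mean(), 3), round(x.std(), 3))  # close to 0.0 and 1.0\n",
    "```"
   ]
  },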
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Pooling\n",
    "    Pooling reduces the amount of feature data.\n",
    "    Max pooling extracts image texture; average pooling preserves background features.\n",
    "    max pooling: outputs the maximum pixel value inside the pooling window\n",
    "    average pooling: outputs the mean pixel value inside the pooling window\n",
    "    \n",
    "    code:\n",
    "    Describing pooling in TF\n",
    "    \n",
    "    tf.keras.layers.MaxPool2D( # max pooling - texture\n",
    "    pool_size = pooling window size, # an integer for a square window, or a tuple (height h, width w)\n",
    "    strides = pooling stride, # an integer, or a tuple (vertical h, horizontal w); defaults to pool_size\n",
    "    padding = 'valid' or 'same' # 'same' pads with zeros, 'valid' (the default) does not\n",
    "    )\n",
    "    \n",
    "    tf.keras.layers.AveragePooling2D( # average pooling - background\n",
    "    pool_size = pooling window size, # an integer for a square window, or a tuple (height h, width w)\n",
    "    strides = pooling stride, # an integer, or a tuple (vertical h, horizontal w); defaults to pool_size\n",
    "    padding = 'valid' or 'same' # 'same' pads with zeros, 'valid' (the default) does not\n",
    "    )\n",
    "    \n",
    "    e.g.:\n",
    "    model = tf.keras.models.Sequential([\n",
    "    Conv2D(filters=6, kernel_size=(5, 5), padding='same'), # convolutional layer\n",
    "    BatchNormalization(), # BN layer\n",
    "    Activation('relu'), # activation layer\n",
    "    MaxPool2D(pool_size=(2, 2), strides=2, padding='same'), # pooling layer: 2x2 window, stride 2, zero padding\n",
    "    Dropout(0.2), # dropout layer\n",
    "    ])"
   ]
  },
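  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The max vs. average distinction can be seen on a tiny example (a NumPy sketch; `pool2d` is a hypothetical helper, not a TF API; single channel, 'valid'-style windows):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def pool2d(x, size=2, stride=2, mode='max'):\n",
    "    # Slide a size x size window with the given stride and take\n",
    "    # the max (texture) or the mean (background) of each window.\n",
    "    oh = (x.shape[0] - size) // stride + 1\n",
    "    ow = (x.shape[1] - size) // stride + 1\n",
    "    op = np.max if mode == 'max' else np.mean\n",
    "    out = np.empty((oh, ow))\n",
    "    for i in range(oh):\n",
    "        for j in range(ow):\n",
    "            out[i, j] = op(x[i*stride:i*stride+size, j*stride:j*stride+size])\n",
    "    return out\n",
    "\n",
    "x = np.array([[ 1.,  2.,  3.,  4.],\n",
    "              [ 5.,  6.,  7.,  8.],\n",
    "              [ 9., 10., 11., 12.],\n",
    "              [13., 14., 15., 16.]])\n",
    "print(pool2d(x, mode='max'))   # [[ 6.  8.] [14. 16.]]\n",
    "print(pool2d(x, mode='mean'))  # [[ 3.5  5.5] [11.5 13.5]]\n",
    "```"
   ]
  },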
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Dropout\n",
    "    During training, a fraction of the neurons is temporarily dropped from the network with a given probability. At inference time, the dropped neurons are reconnected.\n",
    "    \n",
    "    code:\n",
    "    Describing dropout in TF\n",
    "    tf.keras.layers.Dropout(drop probability)\n",
    "\n",
    "    model = tf.keras.models.Sequential([\n",
    "    Conv2D(filters=6, kernel_size=(5, 5), padding='same'), # convolutional layer\n",
    "    BatchNormalization(), # BN layer\n",
    "    Activation('relu'), # activation layer\n",
    "    MaxPool2D(pool_size=(2, 2), strides=2, padding='same'), # pooling layer\n",
    "    Dropout(0.2), # dropout layer: randomly drop 20% of the neurons\n",
    "    ])"
   ]
  },
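  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The training vs. inference behaviour of Dropout(0.2) can be sketched in NumPy (illustrative; `dropout` is a hypothetical helper implementing inverted dropout, which is what tf.keras.layers.Dropout uses: survivors are scaled by 1/(1-rate) so the expected activation is unchanged):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def dropout(x, rate, training=True, rng=None):\n",
    "    # Inverted dropout: zero a fraction `rate` of units during training\n",
    "    # and scale the survivors by 1/(1-rate); identity at inference time.\n",
    "    if not training or rate == 0.0:\n",
    "        return x\n",
    "    rng = rng if rng is not None else np.random.default_rng()\n",
    "    mask = rng.random(x.shape) >= rate\n",
    "    return x * mask / (1.0 - rate)\n",
    "\n",
    "x = np.ones(1000)\n",
    "y = dropout(x, rate=0.2, rng=np.random.default_rng(0))\n",
    "print((y == 0).mean())                     # fraction dropped, close to 0.2\n",
    "print(sorted(set(np.round(y, 6).tolist())))  # [0.0, 1.25]: survivors scaled by 1/0.8\n",
    "```"
   ]
  },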
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Convolutional neural networks: extract features with convolution kernels, then feed them into a fully connected network\n",
    "    \n",
    "    Main modules of a convolutional neural network:\n",
    "    \n",
    "    Convolution\n",
    "    Batch normalization (BN)\n",
    "    Activation\n",
    "    Pooling\n",
    "    Dropout\n",
    "    \n",
    "    Fully connected (FC)\n",
    "    \n",
    "    Convolution is the feature extractor; its recipe is 'CBAPD':\n",
    "    model = tf.keras.models.Sequential([\n",
    "    \n",
    "    C    Conv2D(filters=6, kernel_size=(5, 5), padding='same'), # convolutional layer\n",
    "    B    BatchNormalization(), # BN layer\n",
    "    A    Activation('relu'), # activation layer\n",
    "    P    MaxPool2D(pool_size=(2, 2), strides=2, padding='same'), # pooling layer\n",
    "    D    Dropout(0.2), # dropout layer\n",
    "    \n",
    "    ])"
   ]
  }
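  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The feature-map sizes flowing through one CBAPD block can be traced with the usual Keras output-size rules (a sketch; `out_size` is a hypothetical helper): 'same' padding gives ceil(n / stride), 'valid' gives floor((n - k) / stride) + 1.\n",
    "```python\n",
    "import math\n",
    "\n",
    "def out_size(n, k, s, padding):\n",
    "    # Spatial output size of Conv2D / MaxPool2D under Keras padding rules.\n",
    "    if padding == 'same':\n",
    "        return math.ceil(n / s)\n",
    "    return (n - k) // s + 1  # 'valid'\n",
    "\n",
    "n = 32                                     # e.g. a 32x32 input image\n",
    "n = out_size(n, k=5, s=1, padding='same')  # C: Conv2D 5x5, stride 1 -> 32\n",
    "n = out_size(n, k=2, s=2, padding='same')  # P: MaxPool2D 2x2, stride 2 -> 16\n",
    "print(n)  # 16; B, A and D do not change the spatial size\n",
    "```"
   ]
  }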
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
