{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 1: Mathematical Theory"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1: Tensors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors are multi-dimensional arrays with a uniform type (called a `dtype`).\n",
    "Tensors have shapes. A few related terms:\n",
    "\n",
    "- **Shape**: the length (number of elements) of each axis of a tensor.\n",
    "- **Rank**: the number of tensor axes. A scalar has rank 0, a vector has rank 1, and a matrix has rank 2.\n",
    "- **Axis** or **dimension**: a particular dimension of a tensor.\n",
    "- **Size**: the total number of items in the tensor, i.e., the product of the elements of the shape vector."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "import numpy as np"
   ]
  },
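  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the terms above concrete, here is a minimal sketch (the variable names are illustrative) that builds a rank-0 scalar and a rank-1 vector and checks their rank, shape, and size:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scalar = tf.constant(4)                # rank 0: no axes\n",
    "vector = tf.constant([2.0, 3.0, 4.0])  # rank 1: one axis of length 3\n",
    "print(tf.rank(scalar).numpy(), scalar.shape)  # 0 ()\n",
    "print(tf.rank(vector).numpy(), vector.shape)  # 1 (3,)\n",
    "print(tf.size(vector).numpy())  # 3: the product of the shape vector"
   ]
  },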
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, let's look at a rank-2 tensor, i.e., a matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[1. 2.]\n",
      " [3. 4.]\n",
      " [5. 6.]], shape=(3, 2), dtype=float16)\n"
     ]
    }
   ],
   "source": [
    "rank_2_tensor = tf.constant([[1, 2],\n",
    "                             [3, 4],\n",
    "                             [5, 6]], dtype=tf.float16)\n",
    "print(rank_2_tensor)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, a rank-3 tensor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[[ 0  1  2  3  4]\n",
      "  [ 5  6  7  8  9]]\n",
      "\n",
      " [[10 11 12 13 14]\n",
      "  [15 16 17 18 19]]\n",
      "\n",
      " [[20 21 22 23 24]\n",
      "  [25 26 27 28 29]]], shape=(3, 2, 5), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "# There can be an arbitrary number of\n",
    "# axes (sometimes called \"dimensions\")\n",
    "rank_3_tensor = tf.constant([\n",
    "  [[0, 1, 2, 3, 4],\n",
    "   [5, 6, 7, 8, 9]],\n",
    "  [[10, 11, 12, 13, 14],\n",
    "   [15, 16, 17, 18, 19]],\n",
    "  [[20, 21, 22, 23, 24],\n",
    "   [25, 26, 27, 28, 29]],])\n",
    "                    \n",
    "print(rank_3_tensor)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tensors support many operations, for example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[2 3]\n",
      " [4 5]], shape=(2, 2), dtype=int32) \n",
      "\n",
      "tf.Tensor(\n",
      "[[1 2]\n",
      " [3 4]], shape=(2, 2), dtype=int32) \n",
      "\n",
      "tf.Tensor(\n",
      "[[3 3]\n",
      " [7 7]], shape=(2, 2), dtype=int32) \n",
      "\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant([[1, 2],\n",
    "                 [3, 4]])\n",
    "b = tf.constant([[1, 1],\n",
    "                 [1, 1]]) # Could have also said `tf.ones([2,2])`\n",
    "\n",
    "print(tf.add(a, b), \"\\n\")\n",
    "print(tf.multiply(a, b), \"\\n\")\n",
    "print(tf.matmul(a, b), \"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Shape-related attributes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Type of every element: <dtype: 'int32'>\n",
      "Number of dimensions: 2\n",
      "Shape of tensor: (2, 2)\n",
      "Elements along axis 0 of tensor: 2\n",
      "Elements along the last axis of tensor: 2\n",
      "Total number of elements :  4\n"
     ]
    }
   ],
   "source": [
    "print(\"Type of every element:\", a.dtype)\n",
    "print(\"Number of dimensions:\", a.ndim)\n",
    "print(\"Shape of tensor:\", a.shape)\n",
    "print(\"Elements along axis 0 of tensor:\", a.shape[0])\n",
    "print(\"Elements along the last axis of tensor:\", a.shape[-1])\n",
    "print(\"Total number of elements : \", tf.size(a).numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some shape-manipulation operations. The key idea: `-1` means \"whatever fits\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23\n",
      " 24 25 26 27 28 29], shape=(30,), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "print(tf.reshape(rank_3_tensor, [-1]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[ 0  1  2  3  4]\n",
      " [ 5  6  7  8  9]\n",
      " [10 11 12 13 14]\n",
      " [15 16 17 18 19]\n",
      " [20 21 22 23 24]\n",
      " [25 26 27 28 29]], shape=(6, 5), dtype=int32) \n",
      "\n",
      "tf.Tensor(\n",
      "[[ 0  1  2  3  4  5  6  7  8  9]\n",
      " [10 11 12 13 14 15 16 17 18 19]\n",
      " [20 21 22 23 24 25 26 27 28 29]], shape=(3, 10), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "print(tf.reshape(rank_3_tensor, [3*2, 5]), \"\\n\")\n",
    "print(tf.reshape(rank_3_tensor, [3, -1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sparse tensors are a convenient way to represent data that is mostly zeros:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "SparseTensor(indices=tf.Tensor(\n",
      "[[0 0]\n",
      " [1 2]], shape=(2, 2), dtype=int64), values=tf.Tensor([1 2], shape=(2,), dtype=int32), dense_shape=tf.Tensor([3 4], shape=(2,), dtype=int64)) \n",
      "\n",
      "tf.Tensor(\n",
      "[[1 0 0 0]\n",
      " [0 0 2 0]\n",
      " [0 0 0 0]], shape=(3, 4), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "# Sparse tensors store values by index in a memory-efficient manner\n",
    "sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],\n",
    "                                       values=[1, 2],\n",
    "                                       dense_shape=[3, 4])\n",
    "print(sparse_tensor, \"\\n\")\n",
    "\n",
    "# We can convert sparse tensors to dense\n",
    "print(tf.sparse.to_dense(sparse_tensor))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2: Variables"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A TensorFlow **variable** is the recommended way to represent shared, persistent state manipulated by your program. Unlike an ordinary `tf.Tensor`, which is immutable, a variable's value can be updated in place. This guide covers how to create, update, and manage instances of `tf.Variable` in TensorFlow.\n",
    "\n",
    "Variables are created and tracked via the `tf.Variable` class. A `tf.Variable` represents a tensor whose value can be changed by running ops on it. Specific ops let you read and modify the values of this tensor. Higher-level libraries like `tf.keras` use `tf.Variable` to store model parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32, numpy=\n",
      "array([[1., 2.],\n",
      "       [3., 4.]], dtype=float32)>\n",
      "Shape:  (2, 2)\n",
      "DType:  <dtype: 'float32'>\n",
      "As NumPy:  [[1. 2.]\n",
      " [3. 4.]]\n"
     ]
    }
   ],
   "source": [
    "# Variables can be created in many ways, and can hold many kinds of content\n",
    "my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n",
    "my_variable = tf.Variable(my_tensor)\n",
    "print(my_variable)\n",
    "\n",
    "# Variables can be all kinds of types, just like tensors\n",
    "bool_variable = tf.Variable([False, False, False, True])\n",
    "complex_variable = tf.Variable([5 + 4j, 6 + 1j])\n",
    "\n",
    "# Variables support tensor-like attributes and operations\n",
    "print(\"Shape: \",my_variable.shape)\n",
    "print(\"DType: \",my_variable.dtype)\n",
    "print(\"As NumPy: \", my_variable.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Copying and reshaping:  tf.Tensor([[1. 2. 3. 4.]], shape=(1, 4), dtype=float32)\n",
      "<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32, numpy=\n",
      "array([[1., 2.],\n",
      "       [3., 4.]], dtype=float32)>\n"
     ]
    }
   ],
   "source": [
    "# A variable cannot change shape\n",
    "print(\"\\nCopying and reshaping: \", tf.reshape(my_variable, ([1,4])))\n",
    "print(my_variable)  # Note that the variable itself was not reshaped"
   ]
  },
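  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A variable cannot change shape, but its value can be updated in place with `assign`, `assign_add`, and `assign_sub`. A short sketch (the name `v` is illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "v = tf.Variable([1.0, 2.0])\n",
    "v.assign([3.0, 4.0])      # overwrite the value in place\n",
    "v.assign_add([1.0, 1.0])  # in-place addition\n",
    "print(v.numpy())  # [4. 5.]"
   ]
  },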
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3: Automatic Differentiation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Automatic differentiation](https://baike.baidu.com/item/%E8%87%AA%E5%8A%A8%E5%BE%AE%E5%88%86/1077470?fr=aladdin) is essential for implementing machine learning algorithms such as backpropagation for training neural networks.\n",
    "\n",
    "Automatic differentiation decomposes a complex mathematical computation into a sequence of elementary operations, each of whose derivative is known in closed form. It comes in two flavors:\n",
    "\n",
    "- forward mode\n",
    "- reverse mode\n",
    "\n",
    "A single forward pass in forward mode computes both the output value and the derivative. Reverse mode first runs a forward pass to compute the output, then a backward pass to compute the derivatives, so its memory overhead is higher: it must store the intermediate values from the forward pass, which are needed to compute derivatives during the backward pass.\n",
    "\n",
    "Forward mode is preferable when the output dimension exceeds the input dimension; reverse mode is preferable when the output dimension is much smaller than the input dimension. Viewed in terms of matrix multiplications, the two modes differ in which end of the chain the multiplication starts from: when the output dimension is smaller than the input dimension, reverse mode requires fewer multiplications than forward mode.\n",
    "\n",
    "For the underlying mathematics, see https://zhuanlan.zhihu.com/p/61103504"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
    "outputs": [],
   "source": [
    "# Example: gradients require a floating-point dtype\n",
    "x = tf.Variable(3.0)\n",
    "\n",
    "with tf.GradientTape() as tape:\n",
    "  y = x**2\n",
    "# dy = 2x * dx\n",
    "dy_dx = tape.gradient(y, x)\n",
    "dy_dx.numpy()\n"
   ]
  },
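  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reverse mode pays off when a single scalar output depends on many inputs: one backward pass produces the gradient with respect to every element at once. A small sketch (the names are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "w = tf.Variable(tf.ones([3]))\n",
    "with tf.GradientTape() as tape:\n",
    "  loss = tf.reduce_sum(w ** 2)  # one scalar output of many inputs\n",
    "grad = tape.gradient(loss, w)   # a single backward pass gives the full gradient\n",
    "print(grad.numpy())  # [2. 2. 2.]"
   ]
  },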
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 2: TensorFlow Fundamentals in Practice"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To do machine learning in TensorFlow, you will likely need to define, save, and restore a model.\n",
    "\n",
    "Abstractly, a model is:\n",
    "\n",
    "- A function that computes something on tensors (a **forward pass**)\n",
    "- Some variables that can be updated in response to training\n",
    "\n",
    "In this guide, you will go below the surface of Keras to see how TensorFlow models are defined. It looks at how TensorFlow collects variables and models, as well as how they are saved and restored.\n",
    "\n",
    "Note: To get started with Keras right away, see the [Keras guide collection](./keras/)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1: Models and Layers in TensorFlow"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "%load_ext tensorboard"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<tf.Variable 'train_me:0' shape=() dtype=float32, numpy=18.0>\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(), dtype=float32, numpy=96.0>"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# tf.Module is a base class; models and layers are built by subclassing it\n",
    "class SimpleModule(tf.Module):\n",
    "  def __init__(self, name=None):\n",
    "    super().__init__(name=name)\n",
    "    self.a_variable = tf.Variable(18.0, name=\"train_me\")\n",
    "    self.non_trainable_variable = tf.Variable(6.0, trainable=False, name=\"do_not_train_me\")\n",
    "  def __call__(self, x):\n",
    "    return self.a_variable * x + self.non_trainable_variable\n",
    "\n",
    "# Instantiate the module\n",
    "simple_module = SimpleModule(name=\"simple\")\n",
    "print(simple_module.a_variable)\n",
    "simple_module(tf.constant(5.0))  # This invokes __call__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`Dense` is a layer and `Sequential` is a model; layers compose a model. Below we hand-write a Dense layer, then a model made of two Dense layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model random results: tf.Tensor([[2.3808782 5.7823787]], shape=(1, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "# A hand-written Dense layer\n",
    "class Dense(tf.Module):\n",
    "  def __init__(self, in_features, out_features, name=None):\n",
    "    super().__init__(name=name)\n",
    "    self.w = tf.Variable(\n",
    "      tf.random.normal([in_features, out_features]), name='w')\n",
    "    self.b = tf.Variable(tf.zeros([out_features]), name='b')\n",
    "  def __call__(self, x):\n",
    "    y = tf.matmul(x, self.w) + self.b  # Pass the input x through the affine transform\n",
    "    return tf.nn.relu(y)  # Apply a ReLU activation\n",
    "\n",
    "# A hand-written sequential model; later Keras will implement all of this for us\n",
    "class SequentialModule(tf.Module):\n",
    "  def __init__(self, name=None):\n",
    "    super().__init__(name=name)\n",
    "\n",
    "    self.dense_1 = Dense(in_features=3, out_features=3)\n",
    "    self.dense_2 = Dense(in_features=3, out_features=2)\n",
    "\n",
    "  def __call__(self, x):  # The two Dense layers are configured in __init__ above\n",
    "    x = self.dense_1(x)  # x passes through the first layer\n",
    "    return self.dense_2(x)  # then through the second, which is returned\n",
    "\n",
    "# You have made a model! Now instantiate SequentialModule\n",
    "my_model = SequentialModule(name=\"the_model\")\n",
    "\n",
    "# Call it, with random results\n",
    "print(\"Model random results:\", my_model(tf.constant([[2.0, 2.0, 2.0]])))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the hand-written version above, you must specify both the input and output feature counts, which is tedious. Can a layer automatically infer its input size from the previous layer's output? Yes: defer weight creation until the first call and use `x.shape[-1]` as the input size. In practice Keras already implements this, so you rarely need to."
   ]
  },
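  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of that idea, here is a hypothetical `FlexibleDense` that defers creating its weights until the first call, inferring the input size from `x.shape[-1]`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FlexibleDense(tf.Module):\n",
    "  def __init__(self, out_features, name=None):\n",
    "    super().__init__(name=name)\n",
    "    self.is_built = False\n",
    "    self.out_features = out_features\n",
    "\n",
    "  def __call__(self, x):\n",
    "    # Create weights on the first call, inferring in_features from the input\n",
    "    if not self.is_built:\n",
    "      self.w = tf.Variable(tf.random.normal([x.shape[-1], self.out_features]), name='w')\n",
    "      self.b = tf.Variable(tf.zeros([self.out_features]), name='b')\n",
    "      self.is_built = True\n",
    "    return tf.nn.relu(tf.matmul(x, self.w) + self.b)\n",
    "\n",
    "layer = FlexibleDense(out_features=2)\n",
    "print(layer(tf.constant([[1.0, 2.0, 3.0]])).shape)  # (1, 2)"
   ]
  },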
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2: Saving Models in TensorFlow"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A model has two important parts: a function (its structure) and its weights. They can be saved separately or together."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### First, save the weights"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'my_checkpoint'"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create a checkpoint and save it\n",
    "chkp_path = \"my_checkpoint\"  # the checkpoint path\n",
    "checkpoint = tf.train.Checkpoint(model=my_model)\n",
    "checkpoint.write(chkp_path)"
   ]
  },
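  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To restore weights, build a module with the same structure and read the checkpoint back into it. A self-contained sketch (the `Counter` module and `demo_checkpoint` path are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Counter(tf.Module):\n",
    "  def __init__(self):\n",
    "    super().__init__()\n",
    "    self.value = tf.Variable(7.0)\n",
    "\n",
    "saved = Counter()\n",
    "tf.train.Checkpoint(model=saved).write(\"demo_checkpoint\")\n",
    "\n",
    "restored = Counter()\n",
    "restored.value.assign(0.0)  # clobber the value, then read it back\n",
    "tf.train.Checkpoint(model=restored).read(\"demo_checkpoint\")\n",
    "print(restored.value.numpy())  # 7.0"
   ]
  },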
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Next, save the function (the structure, which brings in graphs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorFlow can run models without the original Python objects, as demonstrated by [TensorFlow Serving](https://tensorflow.org/tfx) and [TensorFlow Lite](https://tensorflow.org/lite), and even when you download a trained model from [TensorFlow Hub](https://tensorflow.org/hub).\n",
    "\n",
    "TensorFlow needs to know how to perform the computations described in Python, but **without the original code**. To do this, you can make a **graph**, as described in the previous [guide](./intro_to_graphs.ipynb).\n",
    "\n",
    "This graph contains the *operations* that implement the function.\n",
    "\n",
    "You can define a graph in the model above by adding the `@tf.function` decorator to indicate that this code should run as a graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MySequentialModule(tf.Module):\n",
    "  def __init__(self, name=None):\n",
    "    super().__init__(name=name)\n",
    "\n",
    "    self.dense_1 = Dense(in_features=3, out_features=3)\n",
    "    self.dense_2 = Dense(in_features=3, out_features=2)\n",
    "\n",
    "  @tf.function\n",
    "  def __call__(self, x):\n",
    "    x = self.dense_1(x)\n",
    "    return self.dense_2(x)\n",
    "\n",
    "# You have made a model with a graph!\n",
    "my_model = MySequentialModule(name=\"the_model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor([[1.5773563 0.       ]], shape=(1, 2), dtype=float32)\n",
      "WARNING:tensorflow:5 out of the last 5 calls to <function MySequentialModule.__call__ at 0x000001FC3A3ED700> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.\n",
      "tf.Tensor(\n",
      "[[[1.5773563 0.       ]\n",
      "  [1.5773563 0.       ]]], shape=(1, 2, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "print(my_model([[2.0, 2.0, 2.0]]))  # a (1, 3) input\n",
    "print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))  # a (1, 2, 3) input, which triggers retracing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:6 out of the last 6 calls to <function MySequentialModule.__call__ at 0x000001FC3A3EDDC0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.\n",
      "tf.Tensor([[0.        2.7770422]], shape=(1, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "# Set up logging.\n",
    "stamp = datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
    "logdir = \"logs/func/%s\" % stamp  # log directory\n",
    "writer = tf.summary.create_file_writer(logdir)\n",
    "\n",
    "# Create a new model\n",
    "new_model = MySequentialModule()\n",
    "\n",
    "# Bracket the function call with\n",
    "# tf.summary.trace_on() and tf.summary.trace_export().\n",
    "tf.summary.trace_on(graph=True, profiler=True)\n",
    "# Call only one tf.function when tracing.\n",
    "print(new_model(tf.constant([[2.0, 2.0, 2.0]])))\n",
    "\n",
    "\n",
    "\n",
    "with writer.as_default():\n",
    "  tf.summary.trace_export(\n",
    "      name=\"my_func_trace\",\n",
    "      step=0,\n",
    "      profiler_outdir=logdir)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "      <iframe id=\"tensorboard-frame-64c564a839dd94\" width=\"100%\" height=\"800\" frameborder=\"0\">\n",
       "      </iframe>\n",
       "      <script>\n",
       "        (function() {\n",
       "          const frame = document.getElementById(\"tensorboard-frame-64c564a839dd94\");\n",
       "          const url = new URL(\"/\", window.location);\n",
       "          const port = 6006;\n",
       "          if (port) {\n",
       "            url.port = port;\n",
       "          }\n",
       "          frame.src = url;\n",
       "        })();\n",
       "      </script>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "#docs_infra: no_execute\n",
    "%tensorboard --logdir logs/func"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Finally, save the whole model. In practice, Keras makes this much simpler."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Assets written to: the_saved_model\\assets\n"
     ]
    }
   ],
   "source": [
    "tf.saved_model.save(my_model, \"the_saved_model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the saved model back\n",
    "new_model = tf.saved_model.load(\"the_saved_model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# new_model, created by loading the saved model, is an internal TensorFlow user object\n",
    "# that works without any knowledge of the original class. It is not an instance of SequentialModule.\n",
    "isinstance(new_model, SequentialModule)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor([[1.7429433 0.       ]], shape=(1, 2), dtype=float32)\n",
      "tf.Tensor(\n",
      "[[[1.5773563 0.       ]\n",
      "  [1.5773563 0.       ]]], shape=(1, 2, 2), dtype=float32)\n",
      "tf.Tensor(\n",
      "[[[1.5773562 0.       ]\n",
      "  [1.5773562 0.       ]\n",
      "  [1.5773562 0.       ]]], shape=(1, 3, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "print(my_model([[4.0, 2.0, 2.0]]))\n",
    "print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))\n",
    "print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0],[2.0, 2.0, 2.0]]]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3: Loading and Initial Processing of Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from __future__ import absolute_import, division, print_function, unicode_literals"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pathlib\n",
    "import os\n",
    "import matplotlib.pyplot as plt\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "np.set_printoptions(precision=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Datasets from Python lists\n",
    "dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])\n",
    "res = tf.data.Dataset.from_tensor_slices(([1,2,3],['a','b','c']))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(1, shape=(), dtype=int32)\n",
      "tf.Tensor(2, shape=(), dtype=int32)\n",
      "tf.Tensor(3, shape=(), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "for i in res:\n",
    "    print(i[0])"
   ]
  },
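  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Beyond iterating, a `Dataset` can be transformed with methods such as `map` and `batch`; a small sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])\n",
    "ds = ds.map(lambda x: x * 2).batch(2)  # double each element, then group into batches of 2\n",
    "for batch in ds:\n",
    "    print(batch.numpy())  # [2 4], then [6 8]"
   ]
  },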
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An introduction to loading images\n",
    "\n",
    "Calling `image_dataset_from_directory(main_directory, labels='inferred')` returns a `tf.data.Dataset` that yields batches of images from the subdirectories `class_a` and `class_b`, together with labels 0 and 1 (0 for `class_a`, 1 for `class_b`).\n",
    "\n",
    "Supported image formats: jpeg, png, bmp, gif. Animated gifs are truncated to the first frame.\n",
    "\n",
    "Arguments:\n",
    "\n",
    "- **directory**: the directory containing the data. If `labels` is `inferred` (the default), it should contain subdirectories, each holding images for one class. Otherwise, the directory structure is ignored.\n",
    "- **labels**: `inferred` (labels are generated from the directory structure), or a list/tuple of integer labels the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via `os.walk(directory)` in Python).\n",
    "- **label_mode**: `int` means labels are encoded as integers (e.g., for `sparse_categorical_crossentropy` loss); `categorical` means labels are encoded as categorical vectors (e.g., for `categorical_crossentropy` loss); `binary` means labels (there can only be 2) are encoded as float32 scalars of 0 or 1 (e.g., for `binary_crossentropy`); `None` means no labels.\n",
    "- **class_names**: only valid when `labels` is `inferred`. An explicit list of class names (must match the names of the subdirectories). Used to control the order of the classes (otherwise alphanumeric order is used).\n",
    "- **color_mode**: one of `grayscale`, `rgb`, `rgba`. Default: `rgb`. Images are converted to 1, 3, or 4 channels.\n",
    "- **batch_size**: size of the data batches. Default: 32.\n",
    "- **image_size**: size to resize images to after reading from disk. Default: (256, 256). Required, because batches flowing through the pipeline must all have the same size.\n",
    "- **shuffle**: whether to shuffle the data. Default: True. If False, the data is sorted in alphanumeric order.\n",
    "- **seed**: optional random seed for shuffling and transformations.\n",
    "- **validation_split**: optional float between 0 and 1, the fraction of data to reserve for validation.\n",
    "- **subset**: one of `training` or `validation`. Used only when `validation_split` is set.\n",
    "- **interpolation**: string, the interpolation method used when resizing images. Default: `bilinear`. Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.\n",
    "- **follow_links**: whether to visit subdirectories pointed to by symlinks. Default: False.\n",
    "\n",
    "Returns: a `tf.data.Dataset` object.\n",
    "\n",
    "If `label_mode` is `None`, it yields float32 tensors of shape `(batch_size, image_size[0], image_size[1], num_channels)` encoding the images. Otherwise, it yields a tuple `(images, labels)`, where `images` has shape `(batch_size, image_size[0], image_size[1], num_channels)` and `labels` follows the format below.\n",
    "\n",
    "Label format rules:\n",
    "\n",
    "- If `label_mode` is `int`, `labels` is an int32 tensor of shape `(batch_size,)`.\n",
    "- If `label_mode` is `binary`, `labels` is a float32 tensor of 0s and 1s with shape `(batch_size, 1)`.\n",
    "- If `label_mode` is `categorical`, `labels` is a float32 tensor of shape `(batch_size, num_classes)`, a one-hot encoding of the class index.\n",
    "\n",
    "Rules regarding the number of channels in the yielded images:\n",
    "\n",
    "- If `color_mode` is `grayscale`, the image tensors have 1 channel.\n",
    "- If `color_mode` is `rgb`, they have 3 channels.\n",
    "- If `color_mode` is `rgba`, they have 4 channels.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Found 2469 files belonging to 24 classes.\n",
      "Using 1976 files for training.\n",
      "Found 2469 images belonging to 24 classes.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(32, 24)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Loading images\n",
    "data_dir = pathlib.Path('C:/Users/Han/Desktop/4.1/大数据与机器智能/作业/第四次大作业附件/dataset/dataset/') \n",
    "train_ds = tf.keras.preprocessing.image_dataset_from_directory(\n",
    "    data_dir,\n",
    "    validation_split=0.2,\n",
    "    subset=\"training\",\n",
    "    seed=123,\n",
    "    image_size=(180,180),\n",
    "    batch_size=32) \n",
    "img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)\n",
    "images, labels = next(img_gen.flow_from_directory(data_dir))\n",
    "labels.shape  # (batch_size, num_classes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "   survived     sex   age  n_siblings_spouses  parch     fare  class     deck  \\\n",
      "0         0    male  22.0                   1      0   7.2500  Third  unknown   \n",
      "1         1  female  38.0                   1      0  71.2833  First        C   \n",
      "2         1  female  26.0                   0      0   7.9250  Third  unknown   \n",
      "3         1  female  35.0                   1      0  53.1000  First        C   \n",
      "4         0    male  28.0                   0      0   8.4583  Third  unknown   \n",
      "\n",
      "   embark_town alone  \n",
      "0  Southampton     n  \n",
      "1    Cherbourg     n  \n",
      "2  Southampton     y  \n",
      "3  Southampton     n  \n",
      "4   Queenstown     y  \n",
      "'survived': [1 1 0 0 0]\n",
      "features:\n",
      "  'sex'               : [b'male' b'female' b'male' b'male' b'male']\n",
      "  'age'               : [36. 34. 35. 47. 28.]\n",
      "  'n_siblings_spouses': [1 0 0 0 0]\n",
      "  'parch'             : [2 0 0 0 0]\n",
      "  'fare'              : [120.    13.    26.    38.5    8.05]\n",
      "  'class'             : [b'First' b'Second' b'Second' b'First' b'Third']\n",
      "  'deck'              : [b'B' b'unknown' b'unknown' b'E' b'unknown']\n",
      "  'embark_town'       : [b'Southampton' b'Southampton' b'Southampton' b'Southampton'\n",
      " b'Southampton']\n",
      "  'alone'             : [b'n' b'y' b'y' b'y' b'y']\n"
     ]
    }
   ],
   "source": [
    "# Loading tabular data\n",
    "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n",
    "df = pd.read_csv(titanic_file, index_col=None)\n",
    "print(df.head())\n",
    "titanic_batches = tf.data.experimental.make_csv_dataset(\n",
    "    titanic_file, batch_size=5,\n",
    "    label_name=\"survived\")\n",
    "for feature_batch, label_batch in titanic_batches.take(1):\n",
    "    print(\"'survived': {}\".format(label_batch))\n",
    "    print(\"features:\")\n",
    "    for key, value in feature_batch.items():\n",
    "        print(\"  {!r:20s}: {}\".format(key, value))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0\n",
      "1\n",
      "0\n",
      "2\n",
      "8\n",
      "1\n"
     ]
    }
   ],
   "source": [
    "# Shuffling data\n",
    "dataset = tf.data.Dataset.from_tensor_slices([0, 0, 1, 8, 2, 1])\n",
    "dataset = dataset.shuffle(buffer_size=2)  # a small buffer only partially shuffles; use buffer_size >= dataset size for a full shuffle\n",
    "for elem in dataset:\n",
    "    print(elem.numpy())  # converted to a NumPy value, no longer a tensor"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Handling class-imbalanced data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "'survived': [1 0 1 1 0 0 1 0 1 0]\n",
      "features:\n",
      "  'sex'               : [b'female' b'male' b'female' b'female' b'male' b'male' b'female' b'male'\n",
      " b'female' b'male']\n",
      "  'age'               : [28.  4. 36. 28. 24. 39. 19. 36. 24. 31.]\n",
      "  'n_siblings_spouses': [0 4 1 0 0 0 0 0 0 0]\n",
      "  'parch'             : [0 1 0 0 0 0 0 0 2 0]\n",
      "  'fare'              : [ 7.75   29.125  17.4    12.35   13.      0.     30.      7.4958 14.5\n",
      "  7.775 ]\n",
      "  'class'             : [b'Third' b'Third' b'Third' b'Second' b'Second' b'First' b'First' b'Third'\n",
      " b'Second' b'Third']\n",
      "  'deck'              : [b'unknown' b'unknown' b'unknown' b'E' b'unknown' b'A' b'B' b'unknown'\n",
      " b'unknown' b'unknown']\n",
      "  'embark_town'       : [b'Queenstown' b'Queenstown' b'Southampton' b'Queenstown' b'Southampton'\n",
      " b'Southampton' b'Southampton' b'Southampton' b'Southampton'\n",
      " b'Southampton']\n",
      "  'alone'             : [b'y' b'n' b'n' b'y' b'y' b'y' b'y' b'y' b'n' b'y']\n"
     ]
    }
   ],
   "source": [
    "# Again use the Titanic dataset as an example\n",
    "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n",
    "titanic_batches = tf.data.experimental.make_csv_dataset(\n",
    "    titanic_file, batch_size=10,\n",
    "    label_name=\"survived\")\n",
    "for feature_batch, label_batch in titanic_batches.take(1):\n",
    "    print(\"'survived': {}\".format(label_batch))\n",
    "    print(\"features:\")\n",
    "    for key, value in feature_batch.items():\n",
    "        print(\"  {!r:20s}: {}\".format(key, value))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define a function that counts the labels in each batch\n",
    "def count(counts, batch):\n",
    "  features, labels = batch\n",
    "  class_1 = labels == 1\n",
    "  class_1 = tf.cast(class_1, tf.int32)\n",
    "\n",
    "  class_0 = labels == 0\n",
    "  class_0 = tf.cast(class_0, tf.int32)\n",
    "\n",
    "  counts['class_0'] += tf.reduce_sum(class_0)\n",
    "  counts['class_1'] += tf.reduce_sum(class_1)\n",
    "\n",
    "  return counts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.65 0.35]\n"
     ]
    }
   ],
   "source": [
    "# Print the fraction of each label\n",
    "counts = titanic_batches.take(10).reduce(\n",
    "    initial_state={'class_0': 0, 'class_1': 0},\n",
    "    reduce_func = count)\n",
    "\n",
    "counts = np.array([counts['class_0'].numpy(),\n",
    "                   counts['class_1'].numpy()]).astype(np.float32)\n",
    "\n",
    "fractions = counts/counts.sum()\n",
    "print(fractions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0 0 0 0 0 0 0 0 0 0]\n"
     ]
    }
   ],
   "source": [
    "# Step 1: split into negative- and positive-label datasets\n",
    "negative_ds = (\n",
    "  titanic_batches\n",
    "    .unbatch()\n",
    "    .filter(lambda features, label: label==0)\n",
    "    .repeat())\n",
    "positive_ds = (\n",
    "  titanic_batches\n",
    "    .unbatch()\n",
    "    .filter(lambda features, label: label==1)\n",
    "    .repeat())\n",
    "for features, label in negative_ds.batch(10).take(1):\n",
    "  print(label.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1 1 0 1 1 0 1 1 1 0]\n",
      "[1 1 0 0 1 1 1 0 1 1]\n",
      "[0 0 0 1 0 1 1 0 0 0]\n"
     ]
    }
   ],
   "source": [
    "# Step 2: sample from the two datasets with balanced weights\n",
    "balanced_ds = tf.data.experimental.sample_from_datasets(\n",
    "    [negative_ds, positive_ds], [0.5, 0.5]).batch(10)\n",
    "for features, labels in balanced_ds.take(3):\n",
    "  print(labels.numpy())"
   ]
  },
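  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An alternative worth knowing (a hedged sketch, not run here): `tf.data.experimental.rejection_resample` rebalances a single dataset by dropping over-represented examples, so splitting into two datasets and repeating them is not required. `class_func` below is an assumed helper that extracts the label from each element; the resampler yields `(class, example)` pairs, so the extra class is mapped off afterwards."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: rebalance in one pass with rejection resampling.\n",
    "def class_func(features, label):\n",
    "    return label\n",
    "\n",
    "resampler = tf.data.experimental.rejection_resample(\n",
    "    class_func, target_dist=[0.5, 0.5])\n",
    "\n",
    "resample_ds = titanic_batches.unbatch().apply(resampler)\n",
    "# Drop the extra class element the resampler prepends to each example\n",
    "balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label).batch(10)\n",
    "for features, labels in balanced_ds.take(1):\n",
    "    print(labels.numpy())"
   ]
  },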
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 3: Hands-on with Keras"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1: Sequential model basics\n",
    "When to use a Sequential model\n",
    "\n",
    "A `Sequential` model is appropriate for **a plain stack of layers**\n",
    "where each layer has **exactly one input tensor and one output tensor**.\n",
    "\n",
    "A Sequential model is **not appropriate** when:\n",
    "\n",
    "- Your model has multiple inputs or multiple outputs\n",
    "- Any of your layers has multiple inputs or multiple outputs\n",
    "- You need to do layer sharing\n",
    "- You want non-linear topology (e.g. a residual connection, a multi-branch\n",
    "model)\n",
    "\n",
    "Schematically, the following `Sequential` model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(3, 4), dtype=float32, numpy=\n",
       "array([[ 0.1687, -0.0047, -0.148 , -0.2176],\n",
       "       [ 0.1687, -0.0047, -0.148 , -0.2176],\n",
       "       [ 0.1687, -0.0047, -0.148 , -0.2176]], dtype=float32)>"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Define Sequential model with 3 layers\n",
    "model = keras.Sequential(\n",
    "    [\n",
    "        layers.Dense(2, activation=\"relu\", name=\"layer1\"),\n",
    "        layers.Dense(3, activation=\"relu\", name=\"layer2\"),\n",
    "        layers.Dense(4, name=\"layer3\"),\n",
    "    ]\n",
    ")\n",
    "# Alternatively, build the model with add(); this style is often preferable because it supports step-by-step debugging\n",
    "model = keras.Sequential(name=\"my_sequential\")\n",
    "model.add(layers.Dense(2, activation=\"relu\", name=\"layer1\"))\n",
    "model.add(layers.Dense(3, activation=\"relu\", name=\"layer2\"))\n",
    "model.add(layers.Dense(4, name=\"layer3\"))\n",
    "# Call model on a test input\n",
    "x = tf.ones((3, 3))\n",
    "y = model(x)\n",
    "y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tensorflow.python.keras.layers.core.Dense at 0x21500038190>,\n",
       " <tensorflow.python.keras.layers.core.Dense at 0x2150002ba90>,\n",
       " <tensorflow.python.keras.layers.core.Dense at 0x2150002b9a0>]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the layer structure\n",
    "model.layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A layer has no weights until data has passed through it, so before the model is called there is nothing to iterate over or inspect; once the model has been called, the weights can be examined.\n",
    "Alternatively, at the model-building stage we can declare an Input shape directly to create the weights up front; see the debugging example below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of weights after calling the model: 6\n"
     ]
    }
   ],
   "source": [
    "model = keras.Sequential(\n",
    "    [\n",
    "        layers.Dense(2, activation=\"relu\"),\n",
    "        layers.Dense(3, activation=\"relu\"),\n",
    "        layers.Dense(4),\n",
    "    ]\n",
    ")  # No weights at this stage!\n",
    "\n",
    "# At this point, you can't do this:\n",
    "# model.weights\n",
    "\n",
    "# You also can't do this:\n",
    "# model.summary()\n",
    "\n",
    "# Call the model on a test input\n",
    "x = tf.ones((1, 4))\n",
    "y = model(x)\n",
    "print(\"Number of weights after calling the model:\", len(model.weights))  # 6"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tf.Variable 'dense/kernel:0' shape=(4, 2) dtype=float32, numpy=\n",
       " array([[-0.7806,  0.1718],\n",
       "        [ 0.0931,  0.3389],\n",
       "        [ 0.8664, -0.4707],\n",
       "        [-0.3865,  0.1477]], dtype=float32)>,\n",
       " <tf.Variable 'dense/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>,\n",
       " <tf.Variable 'dense_1/kernel:0' shape=(2, 3) dtype=float32, numpy=\n",
       " array([[ 0.4125, -0.9624, -0.7256],\n",
       "        [-0.0401,  0.2733, -0.9987]], dtype=float32)>,\n",
       " <tf.Variable 'dense_1/bias:0' shape=(3,) dtype=float32, numpy=array([0., 0., 0.], dtype=float32)>,\n",
       " <tf.Variable 'dense_2/kernel:0' shape=(3, 4) dtype=float32, numpy=\n",
       " array([[-0.479 ,  0.9101, -0.4196, -0.9074],\n",
       "        [-0.3785, -0.7006,  0.6675, -0.0952],\n",
       "        [-0.0019, -0.8642,  0.7156, -0.806 ]], dtype=float32)>,\n",
       " <tf.Variable 'dense_2/bias:0' shape=(4,) dtype=float32, numpy=array([0., 0., 0., 0.], dtype=float32)>]"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.weights  # mainly two kinds of weights per layer: kernel and bias"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_2\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "dense (Dense)                (1, 2)                    10        \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (1, 3)                    9         \n",
      "_________________________________________________________________\n",
      "dense_2 (Dense)              (1, 4)                    16        \n",
      "=================================================================\n",
      "Total params: 35\n",
      "Trainable params: 35\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model.summary()  # inspect the structure and output shapes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "While assembling a model you need to inspect its structure repeatedly to debug it; the example below shows a typical workflow, alternating add() with summary()."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_6\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "conv2d_18 (Conv2D)           (None, 123, 123, 32)      2432      \n",
      "_________________________________________________________________\n",
      "conv2d_19 (Conv2D)           (None, 121, 121, 32)      9248      \n",
      "_________________________________________________________________\n",
      "max_pooling2d_9 (MaxPooling2 (None, 40, 40, 32)        0         \n",
      "=================================================================\n",
      "Total params: 11,680\n",
      "Trainable params: 11,680\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "************************************************************************************\n",
      "Model: \"sequential_6\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "conv2d_18 (Conv2D)           (None, 123, 123, 32)      2432      \n",
      "_________________________________________________________________\n",
      "conv2d_19 (Conv2D)           (None, 121, 121, 32)      9248      \n",
      "_________________________________________________________________\n",
      "max_pooling2d_9 (MaxPooling2 (None, 40, 40, 32)        0         \n",
      "_________________________________________________________________\n",
      "conv2d_20 (Conv2D)           (None, 38, 38, 32)        9248      \n",
      "_________________________________________________________________\n",
      "conv2d_21 (Conv2D)           (None, 36, 36, 32)        9248      \n",
      "_________________________________________________________________\n",
      "max_pooling2d_10 (MaxPooling (None, 12, 12, 32)        0         \n",
      "_________________________________________________________________\n",
      "conv2d_22 (Conv2D)           (None, 10, 10, 32)        9248      \n",
      "_________________________________________________________________\n",
      "conv2d_23 (Conv2D)           (None, 8, 8, 32)          9248      \n",
      "_________________________________________________________________\n",
      "max_pooling2d_11 (MaxPooling (None, 4, 4, 32)          0         \n",
      "=================================================================\n",
      "Total params: 48,672\n",
      "Trainable params: 48,672\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "************************************************************************************\n"
     ]
    }
   ],
   "source": [
    "model = keras.Sequential()\n",
    "model.add(keras.Input(shape=(250, 250, 3)))  # 250x250 RGB images\n",
    "model.add(layers.Conv2D(32, 5, strides=2, activation=\"relu\"))\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.MaxPooling2D(3))\n",
    "\n",
    "# Can you guess what the current output shape is at this point? Probably not.\n",
    "# Let's just print it:\n",
    "model.summary()\n",
    "print('************************************************************************************')\n",
    "# The answer was: (40, 40, 32), so we can keep downsampling...\n",
    "\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.MaxPooling2D(3))\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.MaxPooling2D(2))\n",
    "\n",
    "# And now?\n",
    "model.summary()\n",
    "print('************************************************************************************')\n",
    "# Now that we have 4x4 feature maps, time to apply global max pooling.\n",
    "model.add(layers.GlobalMaxPooling2D())\n",
    "\n",
    "# Finally, we add a classification layer.\n",
    "model.add(layers.Dense(10))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every layer has input and output features; the outputs of intermediate layers can be extracted as follows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tf.Tensor: shape=(1, 123, 123, 32), dtype=float32, numpy=\n",
       " array([[[[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]],\n",
       " \n",
       "         [[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]],\n",
       " \n",
       "         [[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]],\n",
       " \n",
       "         ...,\n",
       " \n",
       "         [[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]],\n",
       " \n",
       "         [[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]],\n",
       " \n",
       "         [[0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          ...,\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ],\n",
       "          [0.    , 0.    , 0.1082, ..., 0.588 , 0.2734, 0.    ]]]],\n",
       "       dtype=float32)>,\n",
       " <tf.Tensor: shape=(1, 121, 121, 32), dtype=float32, numpy=\n",
       " array([[[[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]],\n",
       " \n",
       "         [[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]],\n",
       " \n",
       "         [[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]],\n",
       " \n",
       "         ...,\n",
       " \n",
       "         [[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]],\n",
       " \n",
       "         [[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]],\n",
       " \n",
       "         [[0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          ...,\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167],\n",
       "          [0.    , 0.0346, 0.    , ..., 0.    , 0.    , 0.1167]]]],\n",
       "       dtype=float32)>,\n",
       " <tf.Tensor: shape=(1, 119, 119, 32), dtype=float32, numpy=\n",
       " array([[[[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]],\n",
       " \n",
       "         [[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]],\n",
       " \n",
       "         [[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]],\n",
       " \n",
       "         ...,\n",
       " \n",
       "         [[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]],\n",
       " \n",
       "         [[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]],\n",
       " \n",
       "         [[0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          ...,\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061],\n",
       "          [0.0817, 0.5822, 0.4748, ..., 0.178 , 0.    , 0.1061]]]],\n",
       "       dtype=float32)>]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "initial_model = keras.Sequential(\n",
    "    [\n",
    "        keras.Input(shape=(250, 250, 3)),\n",
    "        layers.Conv2D(32, 5, strides=2, activation=\"relu\"),\n",
    "        layers.Conv2D(32, 3, activation=\"relu\"),\n",
    "        layers.Conv2D(32, 3, activation=\"relu\"),\n",
    "    ]\n",
    ")\n",
    "feature_extractor = keras.Model(\n",
    "    inputs=initial_model.inputs,\n",
    "    outputs=[layer.output for layer in initial_model.layers],\n",
    ")\n",
    "\n",
    "# Call feature extractor on test input.\n",
    "x = tf.ones((1, 250, 250, 3))\n",
    "features = feature_extractor(x)\n",
    "len(features)\n",
    "features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2: Training and evaluating a Model with built-in methods\n",
    "This guide covers training, evaluation, and prediction (inference) of models using the built-in APIs, e.g. Model.fit(), Model.evaluate(), and Model.predict().\n",
    "\n",
    "If you are interested in writing your own training and evaluation loops from scratch, see the guide **Writing a training loop from scratch**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "conv2d (Conv2D)              (None, 12, 12, 32)        832       \n",
      "_________________________________________________________________\n",
      "conv2d_1 (Conv2D)            (None, 10, 10, 32)        9248      \n",
      "_________________________________________________________________\n",
      "max_pooling2d (MaxPooling2D) (None, 3, 3, 32)          0         \n",
      "_________________________________________________________________\n",
      "flatten (Flatten)            (None, 288)               0         \n",
      "_________________________________________________________________\n",
      "dense (Dense)                (None, 128)               36992     \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (None, 10)                1290      \n",
      "_________________________________________________________________\n",
      "softmax (Softmax)            (None, 10)                0         \n",
      "=================================================================\n",
      "Total params: 48,362\n",
      "Trainable params: 48,362\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "# Step 1: build the model\n",
    "model = keras.Sequential()\n",
    "model.add(keras.Input(shape=(28, 28,1)))\n",
    "model.add(layers.Conv2D(32, 5, strides=2, activation=\"relu\"))\n",
    "model.add(layers.Conv2D(32, 3, activation=\"relu\"))\n",
    "model.add(layers.MaxPooling2D(3))\n",
    "model.add(layers.Flatten())\n",
    "model.add(layers.Dense(128,activation='relu'))\n",
    "model.add(layers.Dense(10))\n",
    "model.add(layers.Softmax())\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step 2: prepare the data\n",
    "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()  # loaded as NumPy arrays\n",
    "# Preprocess the data (these are NumPy arrays)\n",
    "x_train = x_train.reshape(60000, 28,28,1).astype(\"float32\") / 255\n",
    "x_test = x_test.reshape(10000, 28,28,1).astype(\"float32\") / 255\n",
    "y_train = y_train.astype(\"float32\")\n",
    "y_test = y_test.astype(\"float32\")\n",
    "# Reserve 10,000 samples for validation\n",
    "x_val = x_train[-10000:]\n",
    "y_val = y_train[-10000:]\n",
    "x_train = x_train[:-10000]\n",
    "y_train = y_train[:-10000]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step 3: specify the training configuration (optimizer, loss, metrics)\n",
    "model.compile(\n",
    "    optimizer=keras.optimizers.RMSprop(),  # Optimizer\n",
    "    # Loss function to minimize\n",
    "    loss=keras.losses.SparseCategoricalCrossentropy(),\n",
    "    # List of metrics to monitor\n",
    "    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Fit model on training data\n",
      "Epoch 1/2\n",
      "782/782 [==============================] - 16s 18ms/step - loss: 0.4814 - sparse_categorical_accuracy: 0.8523 - val_loss: 0.1443 - val_sparse_categorical_accuracy: 0.9561\n",
      "Epoch 2/2\n",
      "782/782 [==============================] - 11s 15ms/step - loss: 0.0757 - sparse_categorical_accuracy: 0.9772 - val_loss: 0.0566 - val_sparse_categorical_accuracy: 0.9835\n"
     ]
    }
   ],
   "source": [
    "# Step 4: train. Calling fit() slices the data into batches of size batch_size and iterates over the whole dataset for the given number of epochs.\n",
    "# Shape bugs often surface at this stage; when they do, go back to the model definition and check summary()\n",
    "print(\"Fit model on training data\")\n",
    "history = model.fit(\n",
    "    x_train,\n",
    "    y_train,\n",
    "    batch_size=64,\n",
    "    epochs=2,\n",
    "    # We pass some validation for\n",
    "    # monitoring validation loss and metrics\n",
    "    # at the end of each epoch\n",
    "    validation_data=(x_val, y_val),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'loss': [0.22085869312286377, 0.06945102661848068],\n",
       " 'sparse_categorical_accuracy': [0.9326000213623047, 0.9789800047874451],\n",
       " 'val_loss': [0.14430679380893707, 0.056623972952365875],\n",
       " 'val_sparse_categorical_accuracy': [0.9560999870300293, 0.9835000038146973]}"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "history.history"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Evaluate on test data\n",
      "79/79 [==============================] - 1s 10ms/step - loss: 0.0465 - sparse_categorical_accuracy: 0.9859\n",
      "test loss, test acc: [0.04647360369563103, 0.9858999848365784]\n",
      "predictions shape: (1, 10)\n",
      "predictions [[9.9986088e-01 2.0294386e-09 8.2699735e-05 2.3915230e-09 1.1129960e-08\n",
      "  1.8024716e-08 5.4266595e-05 6.4837110e-08 9.1986652e-07 1.0902444e-06]]\n",
      "pre_result 0\n",
      "real_result 0\n"
     ]
    }
   ],
   "source": [
    "# Step 5: evaluate the model\n",
    "# First on the test set\n",
    "print(\"Evaluate on test data\")\n",
    "results = model.evaluate(x_test, y_test, batch_size=128)\n",
    "print(\"test loss, test acc:\", results)\n",
    "\n",
    "# Then on new data (the test example at index 3, sliced as 3:4 to keep the batch dimension)\n",
    "predictions = model.predict(x_test[3:4])\n",
    "print(\"predictions shape:\", predictions.shape)\n",
    "print('predictions', predictions)\n",
    "print('pre_result', np.argmax(predictions))  # convert the probabilities to a class label\n",
    "print('real_result', int(y_test[3]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3: Customization and advanced usage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Topics covered in the guide linked below:\n",
    "\n",
    "- Custom losses and metrics\n",
    "- Interoperability between tf.data and NumPy\n",
    "- Multiple inputs and multiple outputs\n",
    "- Callbacks\n",
    "\n",
    "https://tensorflow.google.cn/guide/keras/train_and_evaluate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For these topics, consult the guide directly as needed; they are not reproduced here."
   ]
  },
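  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a taste of the first and last topics (a minimal sketch added here, assuming the MNIST model and data from Section 2): a custom loss is just a callable taking `y_true` and `y_pred`, and a built-in callback such as `EarlyStopping` plugs straight into `fit()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A custom loss is any callable taking (y_true, y_pred) and returning per-sample losses.\n",
    "# my_sparse_ce is a hypothetical name; here it simply wraps the built-in loss.\n",
    "def my_sparse_ce(y_true, y_pred):\n",
    "    return keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n",
    "\n",
    "model.compile(\n",
    "    optimizer=keras.optimizers.RMSprop(),\n",
    "    loss=my_sparse_ce,\n",
    "    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n",
    ")\n",
    "\n",
    "# Built-in callbacks hook into training; EarlyStopping halts when val_loss stops improving.\n",
    "callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)]\n",
    "# model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val), callbacks=callbacks)"
   ]
  },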
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 4: Saving models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### A short answer on saving and loading\n",
    "\n",
    "If you only have 10 seconds to read this guide, here is what you need to know.\n",
    "\n",
    "**Saving a Keras model:**\n",
    "\n",
    "```python\n",
    "model = ...  # Get model (Sequential, Functional Model, or Model subclass)\n",
    "model.save('path/to/location')\n",
    "```\n",
    "\n",
    "**Loading the model back:**\n",
    "\n",
    "```python\n",
    "from tensorflow import keras\n",
    "model = keras.models.load_model('path/to/location')\n",
    "```\n",
    "\n",
    "Now let's look at the details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "from tensorflow import keras"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Assets written to: my_model\\assets\n"
     ]
    }
   ],
   "source": [
    "# Calling `save('my_model')` creates a SavedModel folder `my_model`.\n",
    "model.save(\"my_model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# It can be used to reconstruct the model identically.\n",
    "reconstructed_model = keras.models.load_model(\"my_model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's check:\n",
    "np.testing.assert_allclose(\n",
    "    model.predict(x_test[3:4]), reconstructed_model.predict(x_test[3:4])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1563/1563 [==============================] - 16s 10ms/step - loss: 0.0531 - sparse_categorical_accuracy: 0.9839\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x24f015ee580>"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# The reconstructed model is already compiled and has retained the optimizer\n",
    "# state, so training can resume:\n",
    "reconstructed_model.fit(x_train, y_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 5: Link roundup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|  Session |     Topic     |  Lecture / Practice   |  Assignee   |   Reference   |\n",
    "| :----- | :----------------: | :-----------: | :----------: | :---------- |\n",
    "|   1  |  Keras Sequential model    |  Lecture, practice   |   GM   |  https://tensorflow.google.cn/guide/keras/sequential_model   |\n",
    "|   2  |  Keras layers and models    |  Lecture, practice   |   XHG   |  https://tensorflow.google.cn/guide/keras/custom_layers_and_models  |\n",
    "|   3  |  Saving and loading Keras models  |  Lecture, practice  |   GYN  |  https://tensorflow.google.cn/guide/keras/save_and_serialize  |\n",
    "|   4  |  Keras functional API   |   Practice   |  ZZY  |  https://tensorflow.google.cn/guide/keras/functional   |\n",
    "|   5  |  Keras training and evaluation   |   Practice   |  ZZY  |  https://tensorflow.google.cn/guide/keras/train_and_evaluate   |\n",
    "|   6  |  Keras training loops from scratch    |   Practice   |   GM   |  https://tensorflow.google.cn/guide/keras/writing_a_training_loop_from_scratch   |\n",
    "|   7  |  Keras masking and padding   |   Practice   |   GM  |  https://tensorflow.google.cn/guide/keras/masking_and_padding  |\n",
    "|   8  |  Customizing what happens in Keras fit() |  Practice   |  CZ  |  https://tensorflow.google.cn/guide/keras/customizing_what_happens_in_fit  |\n",
    "|   9  |  Keras transfer learning and fine-tuning   |  Practice   |  CZ  |  https://tensorflow.google.cn/guide/keras/transfer_learning   |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 4: Practical projects with Keras"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 1: Project roundup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### CNN\n",
    "|Session |    Topic    |   Lecture / Practice     |    Assignee  |  Reference       |\n",
    "| :---  |   :---------------:  |  :----------:   |    :----:    | :--- |\n",
    "|   3   | [TensorFlow2 - convolutional networks]()   | Lecture |  CZ   |   https://tensorflow.google.cn/tutorials/images/cnn   |\n",
    "|   3   | [TensorFlow2 - image classification (high-level Keras)]()   | Lecture |  ZZY   |  https://tensorflow.google.cn/tutorials/images/classification   |\n",
    "|   3   | [TensorFlow2 - tf.data and loading images]()   | Lecture |  ZY   |  https://tensorflow.google.cn/guide/data   |\n",
    "|   3   | [TensorFlow2 - image classification (data augmentation)]()   | Lecture |  GM   |  https://tensorflow.google.cn/tutorials/images/data_augmentation   |\n",
    "|   3   | [TensorFlow2 - image classification (transfer learning)]()   | Lecture |  GM   |  https://tensorflow.google.cn/tutorials/images/transfer_learning   |\n",
    "|   3   | [TensorFlow2 - image segmentation]()   | Lecture |  GM   |  https://tensorflow.google.cn/tutorials/images/segmentation   |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### DNN\n",
    "|Session |    Topic    |   Lecture / Practice     |    Assignee  |  Reference       |\n",
    "| :---  |   :---------------:  |  :----------:   |    :----:    | :--- |\n",
    "|   1   | [TensorFlow2 - Dense layers]()   | Lecture |  CZ   |  https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Dense     |\n",
    "|   1   | [TensorFlow2 - basic regression: predicting fuel efficiency (Keras)]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/tutorials/keras/regression |\n",
    "|   2   | [TensorFlow2 - basic image classification (Keras)]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/tutorials/keras/classification |\n",
    "|   3   | [TensorFlow2 - simple audio recognition (Keras)]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/tutorials/audio/simple_audio   |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### RNN\n",
    "|Session |    Topic    |   Lecture / Practice     |    Assignee  |  Reference       |\n",
    "| :---  |   :---------------  |  :----------:   |    :----:    | :--- |\n",
    "|   1   | [TensorFlow2 - Keras recurrent networks]()   | Lecture | ZZY  |  https://tensorflow.google.cn/guide/keras/rnn      |\n",
    "|   2   | [TensorFlow2 - word embeddings (Keras)]()   | Lecture |   GM   |  https://tensorflow.google.cn/tutorials/text/word_embeddings    |\n",
    "|   3   | [TensorFlow2 - text classification (Keras)]()   | Lecture |  GYN   |  https://tensorflow.google.cn/tutorials/text/text_classification_rnn   |\n",
    "|   4   | [TensorFlow2 - text generation (Keras)]()   | Lecture |  XHG   |  https://tensorflow.google.cn/tutorials/text/text_generation   |\n",
    "|   5   | [TensorFlow2 - music generation (Keras)]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/tutorials/audio/music_generation   |\n",
    "|   6   | [TensorFlow2 - attention for machine translation (Keras)]()   | Lecture |  PYY   |  https://tensorflow.google.cn/tutorials/text/nmt_with_attention   |\n",
    "|   7   | [TensorFlow2 - image captioning (Keras)]()   | Lecture |  GM   |  https://tensorflow.google.cn/tutorials/text/image_captioning   |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Transformer\n",
    "|Session |    Topic    |   Lecture / Practice     |    Assignee  |  Reference       |\n",
    "| :---  |   :---------------:  |  :----------:   |    :----:    | :--- |\n",
    "|   8   | [TensorFlow2 - Transformer for language understanding]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/tutorials/text/transformer   |\n",
    "|   9   | [TensorFlow2 - the BERT model]()   | Lecture |  ZZF   |  https://tensorflow.google.cn/text/tutorials/fine_tune_bert   |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 2: Image recognition"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To be filled in later with the excellent example shared by the teaching assistant."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Section 3: RNNs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recurrent neural networks (RNNs) are a class of neural networks that are powerful for modeling sequence data such as time series or natural language.\n",
    "\n",
    "Schematically, an RNN layer uses a `for` loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.\n",
    "\n",
    "The Keras RNN API is designed with a focus on:\n",
    "\n",
    "- **Ease of use**: the built-in `keras.layers.RNN`, `keras.layers.LSTM`, and `keras.layers.GRU` layers enable you to quickly build recurrent models without having to make difficult configuration choices.\n",
    "\n",
    "- **Ease of customization**: you can also define your own RNN cell layer (the inside of the `for` loop) with custom behavior, and use it with the generic `keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are three built-in RNN layers in Keras:\n",
    "\n",
    "1. `keras.layers.SimpleRNN`, a fully-connected RNN where the output from the previous timestep is fed to the next timestep.\n",
    "\n",
    "2. `keras.layers.GRU`, first proposed in [Cho et al., 2014](https://arxiv.org/abs/1406.1078).\n",
    "\n",
    "3. `keras.layers.LSTM`, first proposed in [Hochreiter & Schmidhuber, 1997](https://www.bioinf.jku.at/publications/older/2604.pdf).\n",
    "\n",
    "In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.\n",
    "\n",
    "Here is a simple example of a `Sequential` model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors with an `LSTM` layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_1\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "embedding (Embedding)        (None, None, 64)          64000     \n",
      "_________________________________________________________________\n",
      "lstm (LSTM)                  (None, 128)               98816     \n",
      "_________________________________________________________________\n",
      "dense_2 (Dense)              (None, 10)                1290      \n",
      "=================================================================\n",
      "Total params: 164,106\n",
      "Trainable params: 164,106\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = keras.Sequential()\n",
    "# Add an Embedding layer expecting input vocab of size 1000, and\n",
    "# output embedding dimension of size 64.\n",
    "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n",
    "\n",
    "# Add an LSTM layer with 128 internal units.\n",
    "model.add(layers.LSTM(128))\n",
    "\n",
    "# Add a Dense layer with 10 units.\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The built-in RNNs support a number of useful features:\n",
    "\n",
    "- Recurrent dropout, via the `dropout` and `recurrent_dropout` arguments\n",
    "- The ability to process an input sequence in reverse, via the `go_backwards` argument\n",
    "- Loop unrolling (which can lead to a big speedup when processing short sequences on CPU), via the `unroll` argument\n",
    "- ...and more.\n",
    "\n",
    "For more information, see the [RNN API documentation](https://keras.io/api/layers/recurrent_layers/)."
   ]
  },
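  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch (an illustration added here, not taken from the linked guide), an LSTM combining several of these arguments might look like the following. Note that `recurrent_dropout > 0` and `unroll=True` both disable the CuDNN kernel, as discussed in the CuDNN section later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: an LSTM with recurrent dropout, reversed input\n",
    "# processing, and loop unrolling.\n",
    "demo_layer = layers.LSTM(\n",
    "    32,\n",
    "    dropout=0.2,            # dropout on the input transformation\n",
    "    recurrent_dropout=0.2,  # dropout on the recurrent state\n",
    "    go_backwards=True,      # process the sequence in reverse order\n",
    "    unroll=True,            # unroll the inner loop (short sequences only)\n",
    ")\n",
    "print(demo_layer(tf.random.normal((4, 8, 16))).shape)  # (4, 32)\n"
   ]
  },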
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Outputs and states**\n",
    "\n",
    "By default, the output of an RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is `(batch_size, units)`, where `units` corresponds to the `units` argument passed to the layer's constructor.\n",
    "\n",
    "An RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set `return_sequences=True`. The shape of this output is `(batch_size, timesteps, units)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_2\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "embedding_1 (Embedding)      (None, None, 64)          64000     \n",
      "_________________________________________________________________\n",
      "gru (GRU)                    (None, None, 256)         247296    \n",
      "_________________________________________________________________\n",
      "simple_rnn (SimpleRNN)       (None, 128)               49280     \n",
      "_________________________________________________________________\n",
      "dense_3 (Dense)              (None, 10)                1290      \n",
      "=================================================================\n",
      "Total params: 361,866\n",
      "Trainable params: 361,866\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = keras.Sequential()\n",
    "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n",
    "\n",
    "# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)\n",
    "model.add(layers.GRU(256, return_sequences=True))\n",
    "\n",
    "# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)\n",
    "model.add(layers.SimpleRNN(128))\n",
    "\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or [to initialize another RNN](https://arxiv.org/abs/1409.3215). This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder.\n",
    "\n",
    "To configure an RNN layer to return its internal state, set the `return_state` parameter to `True` when creating the layer. Note that `LSTM` has two state tensors, but `GRU` only has one.\n",
    "\n",
    "To configure the layer's initial state, just call the layer with the additional keyword argument `initial_state`. Note that the shape of the state needs to match the unit size of the layer, as in the example below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"model\"\n",
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "input_2 (InputLayer)            [(None, None)]       0                                            \n",
      "__________________________________________________________________________________________________\n",
      "input_3 (InputLayer)            [(None, None)]       0                                            \n",
      "__________________________________________________________________________________________________\n",
      "embedding_2 (Embedding)         (None, None, 64)     64000       input_2[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "embedding_3 (Embedding)         (None, None, 64)     128000      input_3[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "encoder (LSTM)                  [(None, 64), (None,  33024       embedding_2[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "decoder (LSTM)                  (None, 64)           33024       embedding_3[0][0]                \n",
      "                                                                 encoder[0][1]                    \n",
      "                                                                 encoder[0][2]                    \n",
      "__________________________________________________________________________________________________\n",
      "dense_4 (Dense)                 (None, 10)           650         decoder[0][0]                    \n",
      "==================================================================================================\n",
      "Total params: 258,698\n",
      "Trainable params: 258,698\n",
      "Non-trainable params: 0\n",
      "__________________________________________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "encoder_vocab = 1000\n",
    "decoder_vocab = 2000\n",
    "\n",
    "encoder_input = layers.Input(shape=(None,))\n",
    "encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(\n",
    "    encoder_input\n",
    ")\n",
    "\n",
    "# Return states in addition to output\n",
    "output, state_h, state_c = layers.LSTM(64, return_state=True, name=\"encoder\")(\n",
    "    encoder_embedded\n",
    ")\n",
    "encoder_state = [state_h, state_c]\n",
    "\n",
    "decoder_input = layers.Input(shape=(None,))\n",
    "decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(\n",
    "    decoder_input\n",
    ")\n",
    "\n",
    "# Pass the 2 states to a new LSTM layer, as initial state\n",
    "decoder_output = layers.LSTM(64, name=\"decoder\")(\n",
    "    decoder_embedded, initial_state=encoder_state\n",
    ")\n",
    "output = layers.Dense(10)(decoder_output)\n",
    "\n",
    "model = keras.Model([encoder_input, decoder_input], output)\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, an RNN cell only processes a single timestep.\n",
    "\n",
    "The cell is the inside of the `for` loop of an RNN layer. Wrapping a cell inside a `keras.layers.RNN` layer gives you a layer capable of processing batches of sequences, e.g. `RNN(LSTMCell(10))`.\n",
    "\n",
    "Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in `GRU` and `LSTM` layers enables the use of CuDNN, so you may see better performance.\n",
    "\n",
    "There are three built-in RNN cells, each of them corresponding to the matching RNN layer.\n",
    "\n",
    "- `keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer.\n",
    "\n",
    "- `keras.layers.GRUCell` corresponds to the `GRU` layer.\n",
    "\n",
    "- `keras.layers.LSTMCell` corresponds to the `LSTM` layer.\n",
    "\n",
    "Together with the cell abstraction and the generic `keras.layers.RNN` class, it is easy to implement custom RNN architectures for your research."
   ]
  },
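  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small sanity check (an illustration added here, not taken from the linked guide), we can build both versions and confirm they produce outputs of the same shape:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# RNN(LSTMCell(10)) is mathematically equivalent to LSTM(10); only the\n",
    "# implementation differs (the built-in LSTM layer can use CuDNN on GPU).\n",
    "x = tf.random.normal((2, 5, 8))  # (batch, timesteps, features)\n",
    "cell_based = keras.layers.RNN(keras.layers.LSTMCell(10))\n",
    "built_in = keras.layers.LSTM(10)\n",
    "print(cell_based(x).shape)  # (2, 10)\n",
    "print(built_in(x).shape)    # (2, 10)\n"
   ]
  },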
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Cross-batch statefulness (very long sequences)**\n",
    "\n",
    "When processing very long sequences (possibly infinite), you may want to use the pattern of **cross-batch statefulness**.\n",
    "\n",
    "Normally, the internal state of an RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain state while processing a given sample.\n",
    "\n",
    "If you have very long sequences, though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into the RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it is only seeing one sub-sequence at a time.\n",
    "\n",
    "You can do this by setting `stateful=True` in the constructor.\n",
    "\n",
    "If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it e.g. into:\n",
    "\n",
    "```\n",
    "s1 = [t0, t1, ... t100]\n",
    "s2 = [t101, ... t201]\n",
    "...\n",
    "s16 = [t1501, ... t1547]\n",
    "```\n",
    "\n",
    "Then you would process it via:\n",
    "\n",
    "```python\n",
    "lstm_layer = layers.LSTM(64, stateful=True)\n",
    "for s in sub_sequences:\n",
    "  output = lstm_layer(s)\n",
    "```\n",
    "\n",
    "When you want to clear the state, use `layer.reset_states()`.\n",
    "\n",
    "> Note: In this setup, sample `i` in a given batch is assumed to be the continuation of sample `i` in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100,  sequence_B_from_t0_to_t100]`, the next batch should contain `[sequence_A_from_t101_to_t200,  sequence_B_from_t101_to_t200]`.\n",
    "\n",
    "Here is a complete example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "\n",
    "lstm_layer = layers.LSTM(64, stateful=True)\n",
    "output = lstm_layer(paragraph1)\n",
    "output = lstm_layer(paragraph2)\n",
    "output = lstm_layer(paragraph3)\n",
    "\n",
    "# reset_states() will reset the cached state to the original initial_state.\n",
    "# If no initial_state was provided, zero-states will be used by default.\n",
    "lstm_layer.reset_states()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Bidirectional RNNs**\n",
    "\n",
    "For sequences other than time series (e.g. text), it is often the case that an RNN model can perform better if it not only processes a sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.\n",
    "\n",
    "Keras provides an easy API for building such bidirectional RNNs: the `keras.layers.Bidirectional` wrapper.\n",
    "\n",
    "Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the `go_backwards` field of the newly copied layer, so that it processes the inputs in reverse order.\n",
    "\n",
    "The output of a `Bidirectional` RNN will be, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. summation, change the `merge_mode` parameter in the `Bidirectional` wrapper constructor. For more details about `Bidirectional`, check [the API docs](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Bidirectional/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_3\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "bidirectional (Bidirectional (None, 5, 128)            38400     \n",
      "_________________________________________________________________\n",
      "bidirectional_1 (Bidirection (None, 64)                41216     \n",
      "_________________________________________________________________\n",
      "dense_5 (Dense)              (None, 10)                650       \n",
      "=================================================================\n",
      "Total params: 80,266\n",
      "Trainable params: 80,266\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = keras.Sequential()\n",
    "\n",
    "model.add(\n",
    "    layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))\n",
    ")\n",
    "model.add(layers.Bidirectional(layers.LSTM(32)))\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary()"
   ]
  },
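  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a brief sketch (an illustration added here, not taken from the linked guide): with `merge_mode=\"sum\"`, the forward and backward outputs are added elementwise instead of concatenated, so the output width equals the wrapped layer's `units` rather than twice that."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# With the default merge_mode (\"concat\"), a Bidirectional LSTM(32) outputs\n",
    "# 64 features; with merge_mode=\"sum\", the two directions are added instead.\n",
    "x = tf.random.normal((4, 5, 10))\n",
    "concat_bi = layers.Bidirectional(layers.LSTM(32))\n",
    "summed_bi = layers.Bidirectional(layers.LSTM(32), merge_mode=\"sum\")\n",
    "print(concat_bi(x).shape)  # (4, 64)\n",
    "print(summed_bi(x).shape)  # (4, 32)\n"
   ]
  },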
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**RNNs with nested inputs**\n",
    "\n",
    "Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:\n",
    "\n",
    "`[batch, timestep, {\"video\": [height, width, channel], \"audio\": [frequency]}]`\n",
    "\n",
    "In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:\n",
    "\n",
    "`[batch, timestep, {\"location\": [x, y], \"pressure\": [force]}]`\n",
    "\n",
    "The following code provides an example of how to build a custom RNN cell that accepts such structured inputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "class NestedCell(keras.layers.Layer):\n",
    "    def __init__(self, unit_1, unit_2, unit_3, **kwargs):\n",
    "        self.unit_1 = unit_1\n",
    "        self.unit_2 = unit_2\n",
    "        self.unit_3 = unit_3\n",
    "        self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n",
    "        self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n",
    "        super(NestedCell, self).__init__(**kwargs)\n",
    "\n",
    "    def build(self, input_shapes):\n",
    "        # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]\n",
    "        i1 = input_shapes[0][1]\n",
    "        i2 = input_shapes[1][1]\n",
    "        i3 = input_shapes[1][2]\n",
    "\n",
    "        self.kernel_1 = self.add_weight(\n",
    "            shape=(i1, self.unit_1), initializer=\"uniform\", name=\"kernel_1\"\n",
    "        )\n",
    "        self.kernel_2_3 = self.add_weight(\n",
    "            shape=(i2, i3, self.unit_2, self.unit_3),\n",
    "            initializer=\"uniform\",\n",
    "            name=\"kernel_2_3\",\n",
    "        )\n",
    "\n",
    "    def call(self, inputs, states):\n",
    "        # inputs should be in [(batch, input_1), (batch, input_2, input_3)]\n",
    "        # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]\n",
    "        input_1, input_2 = tf.nest.flatten(inputs)\n",
    "        s1, s2 = states\n",
    "\n",
    "        output_1 = tf.matmul(input_1, self.kernel_1)\n",
    "        output_2_3 = tf.einsum(\"bij,ijkl->bkl\", input_2, self.kernel_2_3)\n",
    "        state_1 = s1 + output_1\n",
    "        state_2_3 = s2 + output_2_3\n",
    "\n",
    "        output = (output_1, output_2_3)\n",
    "        new_states = (state_1, state_2_3)\n",
    "\n",
    "        return output, new_states\n",
    "\n",
    "    def get_config(self):\n",
    "        return {\"unit_1\": self.unit_1, \"unit_2\": self.unit_2, \"unit_3\": self.unit_3}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's build a Keras model that uses a `keras.layers.RNN` layer and the custom cell we just defined."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "unit_1 = 10\n",
    "unit_2 = 20\n",
    "unit_3 = 30\n",
    "\n",
    "i1 = 32\n",
    "i2 = 64\n",
    "i3 = 32\n",
    "batch_size = 64\n",
    "num_batches = 10\n",
    "timestep = 50\n",
    "\n",
    "cell = NestedCell(unit_1, unit_2, unit_3)\n",
    "rnn = keras.layers.RNN(cell)\n",
    "\n",
    "input_1 = keras.Input((None, i1))\n",
    "input_2 = keras.Input((None, i2, i3))\n",
    "\n",
    "outputs = rnn((input_1, input_2))\n",
    "\n",
    "model = keras.models.Model([input_1, input_2], outputs)\n",
    "\n",
    "model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"accuracy\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "10/10 [==============================] - 4s 247ms/step - loss: 0.8896 - rnn_loss: 0.2859 - rnn_1_loss: 0.6037 - rnn_accuracy: 0.0818 - rnn_1_accuracy: 0.0327\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x24f072361f0>"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Demonstrate with random inputs\n",
    "input_1_data = np.random.random((batch_size * num_batches, timestep, i1))\n",
    "input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))\n",
    "target_1_data = np.random.random((batch_size * num_batches, unit_1))\n",
    "target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))\n",
    "input_data = [input_1_data, input_2_data]\n",
    "target_data = [target_1_data, target_2_data]\n",
    "\n",
    "model.fit(input_data, target_data, batch_size=batch_size)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**CuDNN kernels and a performance comparison**\n",
    "\n",
    "In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior `keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your model without worrying about the hardware it will run on.\n",
    "\n",
    "Since the CuDNN kernel is built with certain assumptions, this means **the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers**. E.g.:\n",
    "\n",
    "- Changing the `activation` function from `tanh` to something else.\n",
    "- Changing the `recurrent_activation` function from `sigmoid` to something else.\n",
    "- Using `recurrent_dropout` > 0.\n",
    "- Setting `unroll` to True, which forces LSTM/GRU to decompose the inner `tf.while_loop` into an unrolled `for` loop.\n",
    "- Setting `use_bias` to False.\n",
    "- Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case).\n",
    "\n",
    "For the detailed list of constraints, please see the documentation for the [LSTM](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/LSTM/) and [GRU](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/GRU/) layers.\n",
    "\n",
    "Let's build a simple LSTM model to demonstrate the performance difference.\n",
    "\n",
    "We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size = 64\n",
    "# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).\n",
    "# Each input sequence will be of size (28, 28) (height is treated like time).\n",
    "input_dim = 28\n",
    "\n",
    "units = 64\n",
    "output_size = 10  # labels are from 0 to 9\n",
    "\n",
    "# Build the RNN model\n",
    "def build_model(allow_cudnn_kernel=True):\n",
    "    # CuDNN is only available at the layer level, and not at the cell level.\n",
    "    # This means `LSTM(units)` will use the CuDNN kernel,\n",
    "    # while RNN(LSTMCell(units)) will run on non-CuDNN kernel.\n",
    "    if allow_cudnn_kernel:\n",
    "        # The LSTM layer with default options uses CuDNN.\n",
    "        lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))\n",
    "    else:\n",
    "        # Wrapping a LSTMCell in a RNN layer will not use CuDNN.\n",
    "        lstm_layer = keras.layers.RNN(\n",
    "            keras.layers.LSTMCell(units), input_shape=(None, input_dim)\n",
    "        )\n",
    "    model = keras.models.Sequential(\n",
    "        [\n",
    "            lstm_layer,\n",
    "            keras.layers.BatchNormalization(),\n",
    "            keras.layers.Dense(output_size),\n",
    "        ]\n",
    "    )\n",
    "    return model\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "mnist = keras.datasets.mnist\n",
    "\n",
    "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
    "x_train, x_test = x_train / 255.0, x_test / 255.0\n",
    "sample, sample_label = x_train[0], y_train[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "938/938 [==============================] - 32s 30ms/step - loss: 1.2970 - accuracy: 0.5817 - val_loss: 0.5301 - val_accuracy: 0.8275\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x24f0754f0d0>"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = build_model(allow_cudnn_kernel=True)\n",
    "\n",
    "model.compile(\n",
    "    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "    optimizer=\"sgd\",\n",
    "    metrics=[\"accuracy\"],\n",
    ")\n",
    "\n",
    "\n",
    "model.fit(\n",
    "    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, let's compare to a model that does not use the CuDNN kernel:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "938/938 [==============================] - 29s 29ms/step - loss: 0.4453 - accuracy: 0.8616 - val_loss: 0.5090 - val_accuracy: 0.8345\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x24f0281eeb0>"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "noncudnn_model = build_model(allow_cudnn_kernel=False)\n",
    "noncudnn_model.set_weights(model.get_weights())\n",
    "noncudnn_model.compile(\n",
    "    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "    optimizer=\"sgd\",\n",
    "    metrics=[\"accuracy\"],\n",
    ")\n",
    "noncudnn_model.fit(\n",
    "    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.\n",
    "\n",
    "The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The `tf.device` annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available.\n",
    "\n",
    "You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predicted result is: [3], target result is: 5\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAN80lEQVR4nO3df6hcdXrH8c+ncf3DrBpTMYasNhuRWBWbLRqLSl2RrD9QNOqWDVgsBrN/GHChhEr6xyolEuqP0qAsuYu6sWyzLqgYZVkVo6ZFCF5j1JjU1YrdjV6SSozG+KtJnv5xT+Su3vnOzcyZOZP7vF9wmZnzzJnzcLife87Md879OiIEYPL7k6YbANAfhB1IgrADSRB2IAnCDiRxRD83ZpuP/oEeiwiPt7yrI7vtS22/aftt27d281oAesudjrPbniLpd5IWSNou6SVJiyJia2EdjuxAj/XiyD5f0tsR8U5EfCnpV5Ku6uL1APRQN2GfJekPYx5vr5b9EdtLbA/bHu5iWwC61M0HdOOdKnzjND0ihiQNSZzGA03q5si+XdJJYx5/R9L73bUDoFe6CftLkk61/V3bR0r6kaR19bQFoG4dn8ZHxD7bSyU9JWmKpAci4o3aOgNQq46H3jraGO/ZgZ7ryZdqABw+CDuQBGEHkiDsQBKEHUiCsANJEHYgCcIOJEHYgSQIO5AEYQeSIOxAEoQdSIKwA0kQdiAJwg4kQdiBJAg7kARhB5Ig7EAShB1IgrADSRB2IAnCDiRB2IEkCDuQBGEHkiDsQBKEHUii4ymbcXiYMmVKsX7sscf2dPtLly5tWTvqqKOK686dO7dYv/nmm4v1u+66q2Vt0aJFxXU///zzYn3lypXF+u23316sN6GrsNt+V9IeSfsl7YuIs+toCkD96jiyXxQRH9TwOgB6iPfsQBLdhj0kPW37ZdtLxnuC7SW2h20Pd7ktAF3o9jT+/Ih43/YJkp6x/V8RsWHsEyJiSNKQJNmOLrcHoENdHdkj4v3qdqekxyTNr6MpAPXrOOy2p9o++uB9ST+QtKWuxgDUq5vT+BmSHrN98HX+PSJ+W0tXk8zJJ59crB955JHF+nnnnVesX3DBBS1r06ZNK6577bXXFutN2r59e7G+atWqYn3hwoUta3v27Cmu++qrrxbrL7zwQrE+iDoOe0S8I+kvauwFQA8x9AYkQdiBJAg7kARhB5Ig7EASjujfl9om6zfo5s2bV6yvX7++WO/1ZaaD6sCBA8X6jTfeWKx/8sknHW97ZGSkWP/www+L9TfffLPjbfdaRHi85RzZgSQIO5AEYQeSIOxAEoQdSIKwA0kQdiAJxtlrMH369GJ948aNxfqcOXPqbKdW7XrfvXt3sX7RRRe1rH355ZfFdbN+/6BbjLMDyRF2IAnCDiRB2IEkCDuQBGEHkiDsQBJM2VyDXbt2FevLli0r1q+44opi/ZVXXinW2/1L5ZLNmzcX6wsWLCjW9+7dW6yfccYZLWu33HJLcV3UiyM7kARhB5Ig7EAShB1IgrADSRB2IAnCDiTB9ewD4JhjjinW200vvHr16pa1xYsXF9e9/vrri/W1a9cW6xg8HV/PbvsB2zttbxmzbLrtZ2y/Vd0eV2ezAOo3kdP4X0i69GvLbpX0bEScKunZ6jGAAdY27BGxQdLXvw96laQ11f01kq6uty0Adev0u/EzImJEkiJixPYJrZ5oe4mkJR1uB0BNen4hTEQMSRqS+IAOaFKnQ287bM+UpOp2Z30tAeiFTsO+TtIN1f0bJD1eTzsAeqXtabzttZK+L+l429sl/VTSSkm/tr1Y0u8l/bCXTU52H3/8cVfrf/TRRx2ve9NNNxXrDz/8cLHebo51DI62YY+IRS1KF9fcC4Ae4uuyQBKEHUiCsANJEHYgCcIOJMElrpPA1KlTW9aeeOKJ4roXXnhhsX7ZZZcV608//XSxjv5jymYgOcIOJEHYgSQIO5AEYQeSIOxA
EoQdSIJx9knulFNOKdY3bdpUrO/evbtYf+6554r14eHhlrX77ruvuG4/fzcnE8bZgeQIO5AEYQeSIOxAEoQdSIKwA0kQdiAJxtmTW7hwYbH+4IMPFutHH310x9tevnx5sf7QQw8V6yMjIx1vezJjnB1IjrADSRB2IAnCDiRB2IEkCDuQBGEHkmCcHUVnnnlmsX7PPfcU6xdf3Plkv6tXry7WV6xYUay/9957HW/7cNbxOLvtB2zvtL1lzLLbbL9ne3P1c3mdzQKo30RO438h6dJxlv9LRMyrfn5Tb1sA6tY27BGxQdKuPvQCoIe6+YBuqe3XqtP841o9yfYS28O2W/8zMgA912nYfybpFEnzJI1IurvVEyNiKCLOjoizO9wWgBp0FPaI2BER+yPigKSfS5pfb1sA6tZR2G3PHPNwoaQtrZ4LYDC0HWe3vVbS9yUdL2mHpJ9Wj+dJCknvSvpxRLS9uJhx9sln2rRpxfqVV17ZstbuWnl73OHir6xfv75YX7BgQbE+WbUaZz9iAisuGmfx/V13BKCv+LoskARhB5Ig7EAShB1IgrADSXCJKxrzxRdfFOtHHFEeLNq3b1+xfskll7SsPf/888V1D2f8K2kgOcIOJEHYgSQIO5AEYQeSIOxAEoQdSKLtVW/I7ayzzirWr7vuumL9nHPOaVlrN47eztatW4v1DRs2dPX6kw1HdiAJwg4kQdiBJAg7kARhB5Ig7EAShB1IgnH2SW7u3LnF+tKlS4v1a665plg/8cQTD7mnidq/f3+xPjJS/u/lBw4cqLOdwx5HdiAJwg4kQdiBJAg7kARhB5Ig7EAShB1IgnH2w0C7sexFi8abaHdUu3H02bNnd9JSLYaHh4v1FStWFOvr1q2rs51Jr+2R3fZJtp+zvc32G7ZvqZZPt/2M7beq2+N63y6ATk3kNH6fpL+PiD+X9FeSbrZ9uqRbJT0bEadKerZ6DGBAtQ17RIxExKbq/h5J2yTNknSVpDXV09ZIurpHPQKowSG9Z7c9W9L3JG2UNCMiRqTRPwi2T2ixzhJJS7rsE0CXJhx229+W9Iikn0TEx/a4c8d9Q0QMSRqqXoOJHYGGTGjozfa3NBr0X0bEo9XiHbZnVvWZknb2pkUAdWh7ZPfoIfx+Sdsi4p4xpXWSbpC0srp9vCcdTgIzZswo1k8//fRi/d577y3WTzvttEPuqS4bN24s1u+8886WtccfL//KcIlqvSZyGn++pL+V9LrtzdWy5RoN+a9tL5b0e0k/7EmHAGrRNuwR8Z+SWr1Bv7jedgD0Cl+XBZIg7EAShB1IgrADSRB2IAkucZ2g6dOnt6ytXr26uO68efOK9Tlz5nTSUi1efPHFYv3uu+8u1p966qli/bPPPjvkntAbHNmBJAg7kARhB5Ig7EAShB1IgrADSRB2IIk04+znnntusb5s2bJiff78+S1rs2bN6qinunz66acta6tWrSque8cddxTre/fu7agnDB6O7EAShB1IgrADSRB2IAnCDiRB2IEkCDuQRJpx9oULF3ZV78bWrVuL9SeffLJY37dvX7FeuuZ89+7dxXWRB0d2IAnCDiRB2IEkCDuQBGEHkiDsQBKEHUjCEVF+gn2SpIcknSjpgKShiPhX27dJuknS/1ZPXR4Rv2nzWuWNAehaRIw76/JEwj5T0syI2GT7aEkvS7pa0t9I+iQi7ppoE4Qd6L1WYZ/I/Owjkkaq+3tsb5PU7L9mAXDIDuk9u+3Zkr4naWO1aKnt12w/YPu4FusssT1se7i7VgF0o+1p/FdPtL8t6QVJKyLiUdszJH0gKST9k0ZP9W9s8xqcxgM91vF7dkmy/S1JT0p6KiLuGac+W9KTEXFmm9ch7ECPtQp729N425Z0v6RtY4NefXB30EJJW7ptEkDvTOTT+Ask/Yek1zU69CZJyyUtkjRPo6fx70r6cfVhXum1OLIDPdbVaXxdCDvQex2fxgOYHAg7kARhB5Ig7EAShB1IgrADSRB2
IAnCDiRB2IEkCDuQBGEHkiDsQBKEHUiCsANJ9HvK5g8k/c+Yx8dXywbRoPY2qH1J9NapOnv7s1aFvl7P/o2N28MRcXZjDRQMam+D2pdEb53qV2+cxgNJEHYgiabDPtTw9ksGtbdB7Uuit071pbdG37MD6J+mj+wA+oSwA0k0Enbbl9p+0/bbtm9toodWbL9r+3Xbm5uen66aQ2+n7S1jlk23/Yztt6rbcefYa6i322y/V+27zbYvb6i3k2w/Z3ub7Tds31Itb3TfFfrqy37r+3t221Mk/U7SAknbJb0kaVFEbO1rIy3YflfS2RHR+BcwbP+1pE8kPXRwai3b/yxpV0SsrP5QHhcR/zAgvd2mQ5zGu0e9tZpm/O/U4L6rc/rzTjRxZJ8v6e2IeCcivpT0K0lXNdDHwIuIDZJ2fW3xVZLWVPfXaPSXpe9a9DYQImIkIjZV9/dIOjjNeKP7rtBXXzQR9lmS/jDm8XYN1nzvIelp2y/bXtJ0M+OYcXCarer2hIb7+bq203j309emGR+YfdfJ9OfdaiLs401NM0jjf+dHxF9KukzSzdXpKibmZ5JO0egcgCOS7m6ymWqa8Uck/SQiPm6yl7HG6asv+62JsG+XdNKYx9+R9H4DfYwrIt6vbndKekyjbzsGyY6DM+hWtzsb7ucrEbEjIvZHxAFJP1eD+66aZvwRSb+MiEerxY3vu/H66td+ayLsL0k61fZ3bR8p6UeS1jXQxzfYnlp9cCLbUyX9QIM3FfU6STdU92+Q9HiDvfyRQZnGu9U042p43zU+/XlE9P1H0uUa/UT+vyX9YxM9tOhrjqRXq583mu5N0lqNntb9n0bPiBZL+lNJz0p6q7qdPkC9/ZtGp/Z+TaPBmtlQbxdo9K3ha5I2Vz+XN73vCn31Zb/xdVkgCb5BByRB2IEkCDuQBGEHkiDsQBKEHUiCsANJ/D+f1mbt6t55/AAAAABJRU5ErkJggg==",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Rebuild the model on CPU and copy over the trained weights:\n",
    "# a model created with allow_cudnn_kernel=True can still run on CPU.\n",
    "with tf.device(\"CPU:0\"):\n",
    "    cpu_model = build_model(allow_cudnn_kernel=True)\n",
    "    cpu_model.set_weights(model.get_weights())\n",
    "    # Predict a single sample (add a batch axis) and take the most likely class\n",
    "    result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)\n",
    "    print(\n",
    "        \"Predicted result is: %s, target result is: %s\" % (result.numpy(), sample_label)\n",
    "    )\n",
    "    plt.imshow(sample, cmap=plt.get_cmap(\"gray\"))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
