{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 训练和评估 Keras\n",
    "本指南涵盖了 tensorflow2 中在以下两种情况下的训练、评估和预测（推理）模型： \n",
    "1. 使用内置 API 进行训练和验证时（例如 model.fit()、model.evaluate()、model.predict()）。在 \"Using built-in training & evaluation loops\" 部分中对此进行了介绍。 \n",
    "2. 使用 eager 执行和 GradientTape 对象从头开始编写自定义循环时。在 \"Writing your own training & evaluation loops from scratch\" 部分中对此进行了介绍。 \n",
    "\n",
    "通常，无论你是使用内置循环还是编写自己的循环，模型训练和评估都在每种 Keras 模型中严格按照相同的方式进行工作—顺序模型，使用 函数式 API 构建的模型以及从头开始编写的模型模型子类化。 **本指南不涉及分布式培训。**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 使用内置的培训和评估循环\n",
    "将数据传递到模型的内置训练循环时，应该使用 numpy 数组（如果数据很小并且适合存储在内存中）或tf.data Dataset对象。在接下来的几段中，我们将MNIST数据集用作Numpy数组，以演示如何使用优化程序，损失和指标。\n",
    "### API概述：第一个端到端示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "将训练数据填充入模型\n",
      "Train on 50000 samples, validate on 10000 samples\n",
      "Epoch 1/3\n",
      "50000/50000 [==============================] - 7s 139us/sample - loss: 0.3344 - sparse_categorical_accuracy: 0.9052 - val_loss: 0.1838 - val_sparse_categorical_accuracy: 0.9487\n",
      "Epoch 2/3\n",
      "50000/50000 [==============================] - 5s 108us/sample - loss: 0.1573 - sparse_categorical_accuracy: 0.9531 - val_loss: 0.1480 - val_sparse_categorical_accuracy: 0.9570\n",
      "Epoch 3/3\n",
      "50000/50000 [==============================] - 6s 111us/sample - loss: 0.1161 - sparse_categorical_accuracy: 0.9653 - val_loss: 0.1265 - val_sparse_categorical_accuracy: 0.9636\n",
      "模型训练历史： {'loss': [0.334394164686203, 0.15726798023223876, 0.11613759720087051], 'sparse_categorical_accuracy': [0.9052, 0.95312, 0.96534], 'val_loss': [0.18379049986600876, 0.1479856563270092, 0.12645824457108976], 'val_sparse_categorical_accuracy': [0.9487, 0.957, 0.9636]}\n",
      "使用测试集评估模型\n",
      "10000/10000 [==============================] - 0s 28us/sample - loss: 19.4517 - sparse_categorical_accuracy: 0.9629\n",
      "测试集损失值，测试集精度 [19.45169223022461, 0.9629]\n",
      "对3个样本做出预测\n",
      "预测值维度： (3, 10)\n"
     ]
    }
   ],
   "source": [
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers\n",
    "\n",
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='preditions')(x)\n",
    "\n",
    "model = keras.Model(inputs = inputs, outputs = outputs)\n",
    "\n",
    "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n",
    "\n",
    "# 加工数据\n",
    "x_train = x_train.reshape(60000, 784).astype('float32') / 255\n",
    "x_test = x_test.reshape(10000, 784).astype('float32')\n",
    "y_train = y_train.astype('float32')\n",
    "y_test = y_test.astype('float32')\n",
    "\n",
    "# 截取后 10000 个样本作为验证集\n",
    "x_val = x_train[-10000:]\n",
    "y_val = y_train[-10000:]\n",
    "x_train = x_train[:-10000]\n",
    "y_train = y_train[:-10000]\n",
    "\n",
    "model.compile(optimizer=keras.optimizers.RMSprop(),\n",
    "             loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "             metrics=['sparse_categorical_accuracy'])\n",
    "\n",
    "print('将训练数据填充入模型')\n",
    "history = model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_val, y_val))\n",
    "\n",
    "# 返回的“历史”对象保留训练期间的损失值和度量值的记录\n",
    "print('模型训练历史：' , history.history)\n",
    "\n",
    "# 评估模型\n",
    "print('使用测试集评估模型')\n",
    "result = model.evaluate(x_test, y_test, batch_size=128)\n",
    "print('测试集损失值，测试集精度', result)\n",
    "\n",
    "# 模型预测\n",
    "print('对3个样本做出预测')\n",
    "preditions = model.predict(x_test[:3])\n",
    "print('预测值维度：', preditions.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 指定损失函数、指标、优化器\n",
    "要训练合适的模型，你需要指定损失函数，优化器以及可选的一些要监控的指标"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "              metrics=[keras.metrics.sparse_categorical_accuracy])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "metrics 参数应为列表——你的模型可以具有任意数量的度量。 \n",
    "\n",
    "如果模型有多个输出，则可以为每个输出指定不同的损失和指标，并且可以调制每个输出对模型总损失的贡献。你可以在 \"Passing data to multi-input, multi-output models\" 部分中找到关于此的更多详细信息。\n",
    "\n",
    "注意，如果你对默认设置感到满意，那么在很多情况下，可以通过字符串标识符将优化器，损失和指标指定为快捷方式："
   ]
  },
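   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For example, assuming `model` is the model compiled above, the string-identifier shortcut looks like this (a minimal sketch relying on the Keras defaults, equivalent to the object-based compile call):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Same configuration as above, specified via string identifiers\n",
     "model.compile(optimizer='rmsprop',\n",
     "              loss='sparse_categorical_crossentropy',\n",
     "              metrics=['sparse_categorical_accuracy'])"
    ]
   },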
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 为了以后的重用，我们可以函数中放入模型定义和编译步骤。将在本指南的不同示例中多次调用它们。\n",
    "def get_uncompiled_model():\n",
    "  inputs = keras.Input(shape=(784,), name='digits')\n",
    "  x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "  x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "  outputs = layers.Dense(10, name='predictions')(x)\n",
    "  model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "  return model\n",
    "\n",
    "def get_compiled_model():\n",
    "  model = get_uncompiled_model()\n",
    "  model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),\n",
    "                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "                metrics=['sparse_categorical_accuracy'])\n",
    "  return model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**提供许多内置的优化器，损失和指标**：\n",
    "通常，你不必从头开始创建自己的损失，指标或优化程序，因为所需的可能已经是 Keras API 的一部分：\n",
    "优化器：\n",
    "1. SGD()（有或没有动量）\n",
    "2. RMSprop()\n",
    "3. Adam()……\n",
    "\n",
    "损失函数：\n",
    "1. MeanSquaredError()\n",
    "2. KLDivergence()\n",
    "3. CosineSimilarity()……\n",
    "\n",
    "指标：\n",
    "1. AUC()\n",
    "2. Precision()\n",
    "3. Recall()……\n",
    "\n",
    "#### 自定义损失函数\n",
    "\n",
    "第一个示例创建一个接受输入y_true和y_pred的函数。以下示例显示了损失函数，该函数计算实际数据和预测之间的平均绝对误差:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def basic_loss_function(y_true, y_pred):\n",
    "    return tf.math.reduce_mean(tf.abs(y_true - y_pred))\n",
    "\n",
    "model.compile(optimizer=keras.optimizers.Adam(),\n",
    "              loss=basic_loss_function)\n",
    "\n",
    "model.fit(x_train, y_train, batch_size=64, epochs=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "如果您需要一个损失函数，该函数需要使用 y_true 和 y_pred 旁边的参数，则可以对 tf.keras.losses.Loss 类进行子类化，并实现以下两种方法： \n",
    "1. __init __（self）——在调用损失函数期间接受的参数 \n",
    "2. call（self，y_true，y_pred）——使用目标（y_true）和模型预测（y_pred）计算模型的损失 \n",
    "\n",
    "计算损失时，可以在call（）期间使用传递给__init __（）的参数。 \n",
    "\n",
    "下面的示例显示了如何实现计算BinaryCrossEntropy损失的WeightedCrossEntropy损失函数，其中某个类或整个函数的损失可以通过标量进行修改。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class WeightedBinaryCrossEntropy(keras.losses.Loss):\n",
    "    '''\n",
    "    参数：\n",
    "        pos_weight：影响损失函数的正标签的标量。 \n",
    "        weight：影响整个损失功能的标量。\n",
    "        from_logits：是根据对数还是概率来计算损失。 \n",
    "        reduction：tf.keras.losses.reduction的类型，适用于损失。 \n",
    "        name：损失函数的名称。\n",
    "    '''\n",
    "    def __init__(self, pos_weight, weight, from_logits=False,\n",
    "                 reduction=keras.losses.Reduction.AUTO,\n",
    "                 name='weighted_binary_crossentropy'):\n",
    "        super().__init__(reduction=reduction, name=name)\n",
    "        self.pos_weight = pos_weight\n",
    "        self.weight = weight\n",
    "        self.from_logits = from_logits\n",
    "\n",
    "    def call(self, y_true, y_pred):\n",
    "        ce = tf.losses.binary_crossentropy(\n",
    "            y_true, y_pred, from_logits=self.from_logits)[:,None]\n",
    "        ce = self.weight * (ce*(1-y_true) + self.pos_weight*ce*(y_true))\n",
    "        return ce\n",
    "    \n",
    "# 使用自定义损失函数\n",
    "one_hot_y_train = tf.one_hot(y_train.astype(np.int32), depth=10)\n",
    "\n",
    "model = get_uncompiled_model()\n",
    "\n",
    "model.compile(\n",
    "    optimizer=keras.optimizers.Adam(),\n",
    "    loss=WeightedBinaryCrossEntropy(\n",
    "        pos_weight=0.5, weight = 2, from_logits=True)\n",
    ")\n",
    "model.fit(x_train, one_hot_y_train, batch_size=64, epochs=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 自定义指标\n",
    "如果需要不属于 API 的指标，则可以通过对 Metric 类进行子类化来轻松创建自定义指标。你将需要实现4种方法： \n",
    "1. __init __（self），你将在其中创建度量标准的状态变量。 \n",
    "2. update_state（self，y_true，y_pred，sample_weight = None），它使用目标 y_true 和模型预测y_pred 来更新状态变量。\n",
    "3. result（self），它使用状态变量来计算最终结果。 \n",
    "4. reset_states（self），它重新初始化度量标准的状态。 \n",
    "\n",
    "状态更新和结果计算一般保持分开（分别在update_state（）和result（）中），因为在某些情况下，结果计算可能会非常昂贵，并且只能定期执行。 \n",
    "\n",
    "这是一个简单的示例，显示了如何实现CategoricalTruePositives指标，该指标计算了正确归类为给定类的样本数量："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CategoricalTruePositives(keras.metrics.Metric):\n",
    "\n",
    "    def __init__(self, name='categorical_true_positives', **kwargs):\n",
    "      super(CategoricalTruePositives, self).__init__(name=name, **kwargs)\n",
    "      self.true_positives = self.add_weight(name='tp', initializer='zeros')\n",
    "\n",
    "    def update_state(self, y_true, y_pred, sample_weight=None):\n",
    "      y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))\n",
    "      values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')\n",
    "      values = tf.cast(values, 'float32')\n",
    "      if sample_weight is not None:\n",
    "        sample_weight = tf.cast(sample_weight, 'float32')\n",
    "        values = tf.multiply(values, sample_weight)\n",
    "      self.true_positives.assign_add(tf.reduce_sum(values))\n",
    "\n",
    "    def result(self):\n",
    "      return self.true_positives\n",
    "\n",
    "    def reset_states(self):\n",
    "      self.true_positives.assign(0.)\n",
    "    \n",
    "model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "              metrics=[CategoricalTruePositives()])\n",
    "model.fit(x_train, y_train,\n",
    "          batch_size=64,\n",
    "          epochs=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 处理不符合标准签名的损失和指标\n",
    "可以从y_true和y_pred中计算出绝大多数损失和指标，其中 y_pred 是模型的输出。但不是所有的。\n",
    "\n",
    "例如，正则化损失可能仅需要激活层（在这种情况下没有目标），并且此激活可能不是模型输出。 在这种情况下，你可以从自定义图层的调用方法内部调用 self.add_loss（loss_value）。\n",
    "\n",
    "这是一个添加活动正则化的简单示例（请注意，活动正则化内置在所有Keras层中-该层仅是为了提供一个具体示例）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ActivityRegularizationLayer(layers.Layer):\n",
    "\n",
    "  def call(self, inputs):\n",
    "    self.add_loss(tf.reduce_sum(inputs) * 0.1)\n",
    "    return inputs\n",
    "\n",
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "\n",
    "x = ActivityRegularizationLayer()(x)\n",
    "\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='predictions')(x)\n",
    "\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))\n",
    "\n",
    "model.fit(x_train, y_train,\n",
    "          batch_size=64,\n",
    "          epochs=1)\n",
    "\n",
    "# 可以对记录指标值执行相同的操作：\n",
    "class MetricLoggingLayer(layers.Layer):\n",
    "\n",
    "  def call(self, inputs):\n",
    "    self.add_metric(keras.backend.std(inputs),\n",
    "                    name='std_of_activation',\n",
    "                    aggregation='mean')\n",
    "    return inputs  # Pass-through layer.\n",
    "\n",
    "\n",
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "\n",
    "x = MetricLoggingLayer()(x)\n",
    "\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='predictions')(x)\n",
    "\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))\n",
    "model.fit(x_train, y_train,\n",
    "          batch_size=64,\n",
    "          epochs=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在函数式 API中，你还可以调用 model.add_loss（loss_tensor）或 model.add_metric（metric_tensor，名称，集合）。 这是一个简单的示例："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x1 = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "x2 = layers.Dense(64, activation='relu', name='dense_2')(x1)\n",
    "outputs = layers.Dense(10, name='predictions')(x2)\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "\n",
    "model.add_loss(tf.reduce_sum(x1) * 0.1)\n",
    "\n",
    "model.add_metric(keras.backend.std(x1),\n",
    "                 name='std_of_activation',\n",
    "                 aggregation='mean')\n",
    "\n",
    "model.compile(optimizer=keras.optimizers.RMSprop(1e-3),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))\n",
    "model.fit(x_train, y_train,\n",
    "          batch_size=64,\n",
    "          epochs=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 自动区分验证保留集\n",
    "在第一个端到端示例中，我们使用了 validation_data 参数将 numpy 数组（x_val，y_val）的元组传递给模型，以在每个时期结束时评估验证损失和验证指标。\n",
    "\n",
    "还有另一个选择：参数 validate_spli t允许自动保留部分训练数据以供验证。参数值表示要保留用于验证的数据部分，因此应将其设置为大于0且小于1的数字。例如，validation_split = 0.2表示 \"使用20％的数据进行验证\"， 验证的计算方式是在进行任何改组之前，通过 fit 调用接收的数组的最后 x％ 采样。\n",
    "\n",
    "在训练 numpy 数据时，你可以只使用 validation_split。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = get_compiled_model()\n",
    "model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1, steps_per_epoch=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### tf.data 数据集的培训和评估\n",
    " tf.data API 是 tensorflow2 中的一组实用程序，用于以快速且可扩展的方式加载和预处理数据。 有关创建数据集的完整指南，请参阅 [tf.data 文档](https://tensorflow.google.cn/guide/data)。 您可以将 Dataset 实例直接传递给 fit()，valuate() 和predict() 方法："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train for 782 steps\n",
      "Epoch 1/3\n",
      "782/782 [==============================] - 7s 9ms/step - loss: 0.3435 - sparse_categorical_accuracy: 0.9036\n",
      "Epoch 2/3\n",
      "782/782 [==============================] - 5s 6ms/step - loss: 0.1596 - sparse_categorical_accuracy: 0.9534\n",
      "Epoch 3/3\n",
      "782/782 [==============================] - 4s 5ms/step - loss: 0.1160 - sparse_categorical_accuracy: 0.9652\n",
      "\n",
      "# 评估\n",
      "157/157 [==============================] - 1s 6ms/step - loss: 19.7803 - sparse_categorical_accuracy: 0.9632\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'loss': 19.78027386597593, 'sparse_categorical_accuracy': 0.9632}"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))\n",
    "test_dataset = test_dataset.batch(64)\n",
    "\n",
    "model.fit(train_dataset, epochs=3)\n",
    "\n",
    "print('\\n# 评估')\n",
    "result = model.evaluate(test_dataset)\n",
    "dict(zip(model.metrics_names, result))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "注意：数据集会在每个时期结束时重置，因此可以在下一个时期重复使用。 \n",
    "\n",
    "如果只想对来自此数据集的特定批次进行训练，则可以传递 steps_per_epoch 参数，该参数指定在继续下一个时期之前，该模型应使用该数据集运行多少训练步骤。 如果执行此操作，则不会在每个时期结束时重置数据集，而是继续绘制下一批。数据集最终将用完数据（除非它是无限循环的数据集）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train for 100 steps\n",
      "Epoch 1/3\n",
      "100/100 [==============================] - 1s 14ms/step - loss: 0.8230 - sparse_categorical_accuracy: 0.7922\n",
      "Epoch 2/3\n",
      "100/100 [==============================] - 1s 7ms/step - loss: 0.3325 - sparse_categorical_accuracy: 0.9067\n",
      "Epoch 3/3\n",
      "100/100 [==============================] - 1s 8ms/step - loss: 0.2480 - sparse_categorical_accuracy: 0.9308\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x1c254c1aec8>"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "model.fit(train_dataset.take(100), epochs=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 使用验证集 dataset\n",
    "你可以适当地传递数据集实例作为 validation_data 参数："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))\n",
    "val_dataset = val_dataset.batch(64)\n",
    "\n",
    "model.fit(train_dataset, epochs=3, validation_data=val_dataset)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在每个时期结束时，模型将遍历验证数据集并计算验证损失和验证指标。 \n",
    "\n",
    "如果你只想对该数据集中的特定批次运行验证，则可以传递 validation_steps 参数，该参数指定在中断验证并进入下一个时期之前，模型应使用验证数据集运行多少验证步骤："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))\n",
    "val_dataset = val_dataset.batch(64)\n",
    "\n",
    "model.fit(train_dataset, epochs=3,\n",
    "          validation_data=val_dataset, validation_steps=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "注意：验证数据集将在每次使用后重置（因此你将始终在同一时期之间评估相同的样本）。 \n",
    "\n",
    "从 Dataset  对象进行训练时，不支持参数 validation_split（从训练数据生成保留集），因为此功能需要索引数据集样本的能力，而这通常是数据集 API 无法实现的。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 支持其他输入格式\n",
    "除了 numpy 数组和 tensorflow 数据集，还可以使用 pandas 数据框或从生成批处理的 python 生成器中训练 Keras 模型。 通常，如果你的数据很小并且适合内存，则建议你使用 numpy 输入数据，否则建议使用数据集。\n",
    "\n",
    "### 使用样本加权和类别加权\n",
    "除了输入数据和目标数据，使用 fit() 方法 时还可以将样本权重或类权重传递给模型：\n",
    "1. 从 numpy 数据进行训练时：通过 sample_weight 和 class_weight 参数。\n",
    "2. 从数据集训练时：通过使数据集返回一个元组（input_batch，target_batch，sample_weight_batch）。\n",
    "\n",
    "\"样本权重\" 数组是一个数字数组，用于指定批次中每个样本在计算总损失时应具有的权重。它通常用于不平衡的分类问题中（这种想法是为很少见的类赋予更多的权重）。当所使用的权重为1和0时，该数组可用作损失函数的掩码（完全丢弃某些样本对总损失的贡献）。 \n",
    "\n",
    "\"类别权重\" 是同一概念的一个更具体的实例：它将类别索引映射到应该用于属于该类别的样本的样本权重。例如，如果在数据中类 \" 0\" 的表示量少于类 \" 1\" 的两倍，则可以使用class_weight = {0：1.，1：0.5}。 这是一个 numpy 示例，其中我们使用类权重或样本权重来更加重视5类（在MNIST数据集中的数字“ 5”）的正确分类："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "使用类权重进行拟合\n",
      "WARNING:tensorflow:sample_weight modes were coerced from\n",
      "  ...\n",
      "    to  \n",
      "  ['...']\n",
      "Train on 50000 samples\n",
      "Epoch 1/4\n",
      "50000/50000 [==============================] - 5s 94us/sample - loss: 0.0818 - sparse_categorical_accuracy: 0.9767\n",
      "Epoch 2/4\n",
      "50000/50000 [==============================] - 4s 88us/sample - loss: 0.0704 - sparse_categorical_accuracy: 0.9802\n",
      "Epoch 3/4\n",
      "50000/50000 [==============================] - 4s 87us/sample - loss: 0.0606 - sparse_categorical_accuracy: 0.9828\n",
      "Epoch 4/4\n",
      "50000/50000 [==============================] - 5s 101us/sample - loss: 0.0526 - sparse_categorical_accuracy: 0.9851\n",
      "使用样本权重进行拟合\n",
      "WARNING:tensorflow:sample_weight modes were coerced from\n",
      "  ...\n",
      "    to  \n",
      "  ['...']\n",
      "Train on 50000 samples\n",
      "Epoch 1/4\n",
      "50000/50000 [==============================] - 6s 111us/sample - loss: 0.3647 - sparse_categorical_accuracy: 0.9021\n",
      "Epoch 2/4\n",
      "50000/50000 [==============================] - 5s 94us/sample - loss: 0.1711 - sparse_categorical_accuracy: 0.9513\n",
      "Epoch 3/4\n",
      "50000/50000 [==============================] - 5s 94us/sample - loss: 0.1261 - sparse_categorical_accuracy: 0.9647\n",
      "Epoch 4/4\n",
      "50000/50000 [==============================] - 5s 92us/sample - loss: 0.1002 - sparse_categorical_accuracy: 0.9721\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x1c200ff7788>"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "class_weight = {0: 1., 1: 1., 2: 1., 3: 1., 4: 1.,\n",
    "                # 为类别 5 设置权重 2。使此类更重要\n",
    "                5: 2.,\n",
    "                6: 1., 7: 1., 8: 1., 9: 1.}\n",
    "print('使用类权重进行拟合')\n",
    "model.fit(x_train, y_train,\n",
    "          class_weight=class_weight,\n",
    "          batch_size=64,\n",
    "          epochs=4)\n",
    "\n",
    "sample_weight = np.ones(shape=(len(y_train),))\n",
    "sample_weight[y_train == 5] = 2.\n",
    "print('使用样本权重进行拟合')\n",
    "\n",
    "model = get_compiled_model()\n",
    "model.fit(x_train, y_train,\n",
    "          sample_weight=sample_weight,\n",
    "          batch_size=64,\n",
    "          epochs=4)"
   ]
  },
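   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The \"other input formats\" section above also mentioned Python generators. Here is a minimal sketch of training from a generator; note that the batch_generator helper below is our own illustration, not a Keras API, and it reuses x_train, y_train, and get_compiled_model() defined earlier in this guide:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "def batch_generator(x, y, batch_size=64):\n",
     "    # Yield (inputs, targets) batches indefinitely\n",
     "    while True:\n",
     "        for i in range(0, len(x), batch_size):\n",
     "            yield x[i:i + batch_size], y[i:i + batch_size]\n",
     "\n",
     "model = get_compiled_model()\n",
     "# With a generator, fit() needs steps_per_epoch to know where an epoch ends\n",
     "model.fit(batch_generator(x_train, y_train),\n",
     "          steps_per_epoch=len(x_train) // 64,\n",
     "          epochs=1)"
    ]
   },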
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 将数据传递到多输入，多输出模型\n",
    "考虑以下模型，该模型具有形状为（32、32、3）（即（高度，宽度，通道））的图像输入和形状为（None，10）（即（时间步长，特征））的时间序列输入。我们的模型将具有根据这些输入的组合计算出的两个输出：“得分”（形状（1，））和五类（形状（5，））的概率分布："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "绘制图形\n",
      "Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.\n"
     ]
    }
   ],
   "source": [
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers\n",
    "\n",
    "image_input = keras.Input(shape=(32, 32, 3), name='img_input')\n",
    "timeseries_input = keras.Input(shape=(None, 10), name='ts_input')\n",
    "\n",
    "x1 = layers.Conv2D(3, 3)(image_input)\n",
    "x1 = layers.GlobalMaxPooling2D()(x1)\n",
    "\n",
    "x2 = layers.Conv1D(3, 3)(timeseries_input)\n",
    "x2 = layers.GlobalMaxPooling1D()(x2)\n",
    "\n",
    "x = layers.concatenate([x1, x2])\n",
    "\n",
    "score_output = layers.Dense(1, name='score_output')(x)\n",
    "class_output = layers.Dense(5, name='class_output')(x)\n",
    "\n",
    "model = keras.Model(inputs=[image_input, timeseries_input],\n",
    "                    outputs=[score_output, class_output])\n",
    "\n",
    "print('绘制图形')\n",
    "keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)\n",
    "\n",
    "# 对于多输入输出模型，你可以指定不同的损失函数、指标、类权重、样本权重\n",
    "'''\n",
    "model.compile(\n",
    "    optimizer=keras.optimizers.RMSprop(1e-3),\n",
    "    loss=[keras.losses.MeanSquaredError(),\n",
    "          keras.losses.CategoricalCrossentropy(from_logits=True)],\n",
    "    metrics=[[keras.metrics.MeanAbsolutePercentageError(),\n",
    "              keras.metrics.MeanAbsoluteError()],\n",
    "             [keras.metrics.CategoricalAccuracy()]])\n",
    "             \n",
    "# 或者可以这样表示：\n",
    "model.compile(\n",
    "    optimizer=keras.optimizers.RMSprop(1e-3),\n",
    "    loss={'score_output': keras.losses.MeanSquaredError(),\n",
    "          'class_output': keras.losses.CategoricalCrossentropy(from_logits=True)},\n",
    "    metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(),\n",
    "                              keras.metrics.MeanAbsoluteError()],\n",
    "             'class_output': [keras.metrics.CategoricalAccuracy()]},\n",
    "             loss_weights={'score_output': 2., 'class_output': 1.})\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "将数据合适地传递到多输入或多输出模型的工作方式与在编译中指定损失函数的方式类似：你可以传递 numpy 数组的列表（与1：1映射到接收损失函数的输出）或指令将输出名称映射到训练数据的 numpy 数组："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(\n",
    "    optimizer=keras.optimizers.RMSprop(1e-3),\n",
    "    loss=[keras.losses.MeanSquaredError(),\n",
    "          keras.losses.CategoricalCrossentropy(from_logits=True)])\n",
    "\n",
    "img_data = np.random.random_sample(size=(100, 32, 32, 3))\n",
    "ts_data = np.random.random_sample(size=(100, 20, 10))\n",
    "score_targets = np.random.random_sample(size=(100, 1))\n",
    "class_targets = np.random.random_sample(size=(100, 5))\n",
    "\n",
    "model.fit([img_data, ts_data], [score_targets, class_targets],\n",
    "          batch_size=32,\n",
    "          epochs=3)\n",
    "\n",
    "model.fit({'img_input': img_data, 'ts_input': ts_data},\n",
    "          {'score_output': score_targets, 'class_output': class_targets},\n",
    "          batch_size=32,\n",
    "          epochs=3)\n",
    "\n",
    "# 这是数据集的用例：与我们对 numpy 数组所做的类似，数据集应返回一个字典元组：\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices(\n",
    "    ({'img_input': img_data, 'ts_input': ts_data},\n",
    "     {'score_output': score_targets, 'class_output': class_targets}))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "model.fit(train_dataset, epochs=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 使用回调\n",
    "Keras 中的回调是在训练过程中（在某个时期开始时，在批处理结束时，在某个时期结束时等）在不同时间点调用的对象，可用于实现以下行为： \n",
    "1. 在训练过程中的不同时间点进行验证（除了内置的按时间段验证） \n",
    "2. 定期或在超过特定精度阈值时对模型进行检查 \n",
    "3. 当训练似乎停滞不前时，更改模型的学习率 \n",
    "4. 当训练似乎停滞不前时，对顶层进行微调 \n",
    "5. 在培训结束或超出特定性能阈值时发送电子邮件或即时消息通知……"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 40000 samples, validate on 10000 samples\n",
      "Epoch 1/20\n",
      "40000/40000 [==============================] - 5s 115us/sample - loss: 0.3709 - sparse_categorical_accuracy: 0.8956 - val_loss: 0.2374 - val_sparse_categorical_accuracy: 0.9275\n",
      "Epoch 2/20\n",
      "40000/40000 [==============================] - 5s 113us/sample - loss: 0.1771 - sparse_categorical_accuracy: 0.9482 - val_loss: 0.1827 - val_sparse_categorical_accuracy: 0.9470\n",
      "Epoch 3/20\n",
      "40000/40000 [==============================] - 4s 101us/sample - loss: 0.1305 - sparse_categorical_accuracy: 0.9609 - val_loss: 0.1747 - val_sparse_categorical_accuracy: 0.9473\n",
      "Epoch 4/20\n",
      "40000/40000 [==============================] - 4s 112us/sample - loss: 0.1049 - sparse_categorical_accuracy: 0.9683 - val_loss: 0.1508 - val_sparse_categorical_accuracy: 0.9547\n",
      "Epoch 5/20\n",
      "40000/40000 [==============================] - 4s 108us/sample - loss: 0.0842 - sparse_categorical_accuracy: 0.9748 - val_loss: 0.1486 - val_sparse_categorical_accuracy: 0.9567\n",
      "Epoch 6/20\n",
      "40000/40000 [==============================] - 4s 98us/sample - loss: 0.0724 - sparse_categorical_accuracy: 0.9787 - val_loss: 0.1419 - val_sparse_categorical_accuracy: 0.9585\n",
      "Epoch 00006: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x1c2011d5188>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "callbacks = [\n",
    "    keras.callbacks.EarlyStopping(\n",
    "        # 当val_loss不再改善时停止训练\n",
    "        monitor='val_loss',\n",
    "        #  \"不再改善\" 被定义为 \"减少不超过1e-2\"\n",
    "        min_delta=1e-2,\n",
    "        # \"不再改善\" 进一步定义为 \"至少2个epoch\"\n",
    "        patience=2,\n",
    "        verbose=1)\n",
    "]\n",
    "model.fit(x_train, y_train,\n",
    "          epochs=20,\n",
    "          batch_size=64,\n",
    "          callbacks=callbacks,\n",
    "          validation_split=0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 内置的回调\n",
    "1. ModelCheckpoint：定期保存模型\n",
    "2. EarlyStopping：当训练不再改善验证指标时，停止训练\n",
    "3. TensorBoard：定期编写可在TensorBoard中可视化的模型日志\n",
    "4. CSVLogger：将损失和指标数据流式传输到CSV文件……\n",
    "\n",
    "#### 自定义回调\n",
    "你可以通过扩展基类 keras.callbacks.Callback 来创建自定义回调。\n",
    "\n",
    "回调可以通过类属性 self.model 访问其关联的模型。 这是一个简单的示例，在训练过程中保存了每批次损失值的列表："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class LossHistory(keras.callbacks.Callback):\n",
    "\n",
    "    def on_train_begin(self, logs):\n",
    "        self.losses = []\n",
    "\n",
    "    def on_batch_end(self, batch, logs):\n",
    "        self.losses.append(logs.get('loss'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 检查点模型\n",
    "在相对较大的数据集上训练模型时，至关重要的是要定期保存模型的检查点。 最简单的方法是使用ModelCheckpoint 回调：\n",
    "\n",
    "> 有关序列化和保存的完整指南，请参见 [Guide to Saving and Serializing Models](https://tensorflow.google.cn/guide/keras/save_and_serialize)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = get_compiled_model()\n",
    "\n",
    "callbacks = [\n",
    "    keras.callbacks.ModelCheckpoint(\n",
    "        # 保存路径\n",
    "        filepath='mymodel_{epoch}',\n",
    "        # 以下两个参数表示仅当且仅当 ＃`val_loss`得分提高了，将覆盖当前检查点。\n",
    "        save_best_only=True,\n",
    "        monitor='val_loss',\n",
    "        verbose=1)\n",
    "]\n",
    "model.fit(x_train, y_train,\n",
    "          epochs=3,\n",
    "          batch_size=64,\n",
    "          callbacks=callbacks,\n",
    "          validation_split=0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 使用学习率时间表\n",
    "训练深度学习模型的常见模式是随着训练的进行逐渐减少学习。这通常称为 \"学习率衰减\"。 \n",
    "\n",
    "学习衰减进度表可以是静态的（根据当前纪元或当前批次索引预先确定），也可以是动态的（响应于模型的当前行为，尤其是验证损失）。\n",
    "\n",
    "内置的时间表：ExponentialDecay，PiecewiseConstantDecay，PolynomialDecay和InverseTimeDecay。\n",
    "\n",
    "#### 将时间表传递给优化器\n",
    "通过在优化中传递时间表对象作为 learning_rate 参数，可以轻松使用静态学习率衰减时间表。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "initial_learning_rate = 0.1\n",
    "lr_schedule = keras.optimizers.schedules.ExponentialDecay(\n",
    "    initial_learning_rate,\n",
    "    decay_steps=100000,\n",
    "    decay_rate=0.96,\n",
    "    staircase=True)\n",
    "\n",
    "optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 使用回调实现动态学习率计划\n",
    "由于优化程序无法访问验证指标，因此无法使用这些计划对象来实现动态学习率计划（例如，当验证损失不再改善时降低学习率）。 \n",
    "\n",
    "但是，回调确实可以访问所有指标，包括验证指标。因此，你可以通过使用回调来修改优化程序上的当前学习率，从而实现此模式。实际上，它甚至是作为 ReduceLROnPlateau 回调内置的。\n",
    "\n",
    "### 可视化训练过程中的损失值和指标\n",
    "在训练过程中关注模型的最好方法是使用 tensorboard，这是一个基于浏览器的应用程序，可以在本地运行，为你提供： \n",
    "1. 实时损失图以及用于评估和评估的指标 \n",
    "2.（可选）可视化图层激活的直方图 \n",
    "3. （可选）您的嵌入层学习的嵌入空间的3D可视化 \n",
    "\n",
    "如果你已通过pip安装TensorFlow，则应该能够从命令行启动TensorBoard：\n",
    "```sh\n",
    "tensorboard --logdir=/full_path_to_your_logs\n",
    "```\n",
    "\n",
    "#### 使用 tensorboard回调\n",
    "将 tensorboard 与 Keras模型和 fit() 的结合使用的最简单方法是 tensorboard 回调。 在最简单的情况下，只需指定你希望回调写日志的位置，就可以了："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tensorboard_cbk = keras.callbacks.TensorBoard(log_dir='/full_path_to_your_logs')\n",
    "model.fit(dataset, epochs=10, callbacks=[tensorboard_cbk])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tensorboard 回调具有许多有用的选项，包括是否记录嵌入，直方图以及写入日志的频率："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "keras.callbacks.TensorBoard(\n",
    "  log_dir='/full_path_to_your_logs',\n",
    "  histogram_freq=0,  # 记录直方图可视化的频率\n",
    "  embeddings_freq=0,  # 记录可视化嵌入的频率\n",
    "  update_freq='epoch')  # 写入日志的频率（默认值：每个时期一次）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 从头开始编写自己的训练和评估循环\n",
    "如果你想使训练和评估循环的层次比 fit() 和 evaluate() 提供的层次低，则应该编写自己的方法。\n",
    "### 使用GradientTape：第一个端到端示例\n",
    "在 GradientTape 范围内调用模型使你能够检索与损失值相关的层的可训练权值的梯度。使用优化器实例，可以使用这些梯度来更新这些变量（可以使用model.trainable_weights进行检索）。 \n",
    "\n",
    "让我们重用第一部分中的初始MNIST模型，并使用带有训练循环的小批量梯度对其进行训练。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建这个模型\n",
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='predictions')(x)\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "\n",
    "# 初始化优化器\n",
    "optimizer = keras.optimizers.SGD(learning_rate=1e-3)\n",
    "# 初始化损失函数\n",
    "loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n",
    "\n",
    "# 准备训练集\n",
    "batch_size = 64\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)\n",
    "\n",
    "# 对一些  epochs进行循环训练\n",
    "epochs = 3\n",
    "for epoch in range(epochs):\n",
    "  print('Start of epoch %d' % (epoch,))\n",
    "\n",
    "  # 遍历数据集的批次\n",
    "  for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n",
    "\n",
    "    # 打开 GradientTape 以记录运行的操作，在正向传递过程中，启用自动区分。\n",
    "    with tf.GradientTape() as tape:\n",
    "\n",
    "      # 进行前向传播。 该层对其输入进行的操作将记录在GradientTape上。\n",
    "      logits = model(x_batch_train, training=True)\n",
    "\n",
    "      # 计算此小批量的损失值。\n",
    "      loss_value = loss_fn(y_batch_train, logits)\n",
    "\n",
    "    # 使用梯度带自动检索与损失相关的可训练变量的梯度。\n",
    "    grads = tape.gradient(loss_value, model.trainable_weights)\n",
    "\n",
    "    # 通过更新变量的值来运行梯度下降的一个步骤，以最小化损失。\n",
    "    optimizer.apply_gradients(zip(grads, model.trainable_weights))\n",
    "\n",
    "    # 每200个批次记录一次。\n",
    "    if step % 200 == 0:\n",
    "        print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))\n",
    "        print('Seen so far: %s samples' % ((step + 1) * 64))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 指标的低级处理\n",
    "让我们将指标添加到组合中。你可以在从头开始编写的训练循环中随时使用内置指标（或您编写的自定义指标）。流程如下：\n",
    "1. 在循环开始时实例化指标\n",
    "2. 每批之后调用 metric.update_state()\n",
    "3. 当需要显示度量标准的当前值时，调用 metric.result()\n",
    "4. 需要清除指标状态时（通常在纪元末尾），调用 metric.reset_states()\n",
    "\n",
    "让我们使用这些知识在每个时期结束时根据验证数据计算 SparseCategoricalAccuracy："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始时期 0\n",
      "每批次每步损失值 0: 2.386007308959961\n",
      "到目前为止看过: 64 samples\n",
      "每批次每步损失值 200: 2.205928325653076\n",
      "到目前为止看过: 12864 samples\n",
      "每批次每步损失值 400: 2.150188446044922\n",
      "到目前为止看过: 25664 samples\n",
      "每批次每步损失值 600: 2.0384743213653564\n",
      "到目前为止看过: 38464 samples\n",
      "每个时期后训练精度为: 0.2849400043487549\n",
      "验证集精度: 0.5062000155448914\n",
      "开始时期 1\n",
      "每批次每步损失值 0: 1.9268288612365723\n",
      "到目前为止看过: 64 samples\n",
      "每批次每步损失值 200: 1.8547382354736328\n",
      "到目前为止看过: 12864 samples\n",
      "每批次每步损失值 400: 1.7548692226409912\n",
      "到目前为止看过: 25664 samples\n",
      "每批次每步损失值 600: 1.5342637300491333\n",
      "到目前为止看过: 38464 samples\n",
      "每个时期后训练精度为: 0.6088399887084961\n",
      "验证集精度: 0.7113000154495239\n",
      "开始时期 2\n",
      "每批次每步损失值 0: 1.4145616292953491\n",
      "到目前为止看过: 64 samples\n",
      "每批次每步损失值 200: 1.2527430057525635\n",
      "到目前为止看过: 12864 samples\n",
      "每批次每步损失值 400: 1.1811316013336182\n",
      "到目前为止看过: 25664 samples\n",
      "每批次每步损失值 600: 1.0681416988372803\n",
      "到目前为止看过: 38464 samples\n",
      "每个时期后训练精度为: 0.7368000149726868\n",
      "验证集精度: 0.7858999967575073\n"
     ]
    }
   ],
   "source": [
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='predictions')(x)\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "\n",
    "optimizer = keras.optimizers.SGD(learning_rate=1e-3)\n",
    "loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n",
    "\n",
    "# 准备指标\n",
    "train_acc_metric = keras.metrics.SparseCategoricalAccuracy()\n",
    "val_acc_metric = keras.metrics.SparseCategoricalAccuracy()\n",
    "\n",
    "batch_size = 64\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)\n",
    "\n",
    "val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))\n",
    "val_dataset = val_dataset.batch(64)\n",
    "\n",
    "epochs = 3\n",
    "for epoch in range(epochs):\n",
    "  print('--' * 10)\n",
    "  print('开始时期 %d' % (epoch,))\n",
    "\n",
    "  for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n",
    "    with tf.GradientTape() as tape:\n",
    "      logits = model(x_batch_train)\n",
    "      loss_value = loss_fn(y_batch_train, logits)\n",
    "    grads = tape.gradient(loss_value, model.trainable_weights)\n",
    "    optimizer.apply_gradients(zip(grads, model.trainable_weights))\n",
    "\n",
    "    # 更新训练指标\n",
    "    train_acc_metric(y_batch_train, logits)\n",
    "\n",
    "    if step % 200 == 0:\n",
    "        print('每批次每步损失值 %s: %s' % (step, float(loss_value)))\n",
    "        print('到目前为止看过: %s samples' % ((step + 1) * 64))\n",
    "\n",
    "  # 每次时期最后显示质保\n",
    "  train_acc = train_acc_metric.result()\n",
    "  print('每个时期后训练精度为: %s' % (float(train_acc),))\n",
    "  # 在每个时期结束时重置训练指标\n",
    "  train_acc_metric.reset_states()\n",
    "\n",
    "  # 在每个时期结束时运行一个验证循环\n",
    "  for x_batch_val, y_batch_val in val_dataset:\n",
    "    val_logits = model(x_batch_val)\n",
    "    # 更新价格指标\n",
    "    val_acc_metric(y_batch_val, val_logits)\n",
    "  val_acc = val_acc_metric.result()\n",
    "  val_acc_metric.reset_states()\n",
    "  print('验证集精度: %s' % (float(val_acc),))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 额外损失的低级处理\n",
    "在上一节中看到，可以通过在 call 方法中调用 self.add_loss(value) ，可以通过层添加正则化损失。。 在一般情况下，将需要在训练循环中考虑这些损失（除非自己编写模型并且已经知道它不会造成这种损失）。 \n",
    "\n",
    "在上一节中的示例，其中包含一个会产生正则化损失的层："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ActivityRegularizationLayer(layers.Layer):\n",
    "\n",
    "  def call(self, inputs):\n",
    "    self.add_loss(1e-2 * tf.reduce_sum(inputs))\n",
    "    return inputs\n",
    "\n",
    "inputs = keras.Input(shape=(784,), name='digits')\n",
    "x = layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
    "# 将活动正则化插入为一层\n",
    "x = ActivityRegularizationLayer()(x)\n",
    "x = layers.Dense(64, activation='relu', name='dense_2')(x)\n",
    "outputs = layers.Dense(10, name='predictions')(x)\n",
    "\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "\n",
    "# 当你像这样调用一个模型\n",
    "logits = model(x_train)\n",
    "\n",
    "# 它在前向传递过程中产生的损失将添加到model.losses属性中\n",
    "logits = model(x_train[:64])\n",
    "print(model.losses)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "跟踪的损失首先在模型 __call__ 的开始处清除，因此你只会看到在此前向传递过程中产生的损失。例如，反复调用模型然后查询损失仅显示在上一次调用期间创建的最新损失："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "logits = model(x_train[:64])\n",
    "logits = model(x_train[64: 128])\n",
    "logits = model(x_train[128: 192])\n",
    "print(model.losses)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "要在训练过程中考虑这些损失，要做的就是修改训练循环以将sum（model.losses）加到总损失中："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = keras.optimizers.SGD(learning_rate=1e-3)\n",
    "\n",
    "epochs = 3\n",
    "for epoch in range(epochs):\n",
    "  print('Start of epoch %d' % (epoch,))\n",
    "\n",
    "  for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n",
    "    with tf.GradientTape() as tape:\n",
    "      logits = model(x_batch_train)\n",
    "      loss_value = loss_fn(y_batch_train, logits)\n",
    "\n",
    "      # 添加在此向前传递过程中创建的额外损失：\n",
    "      loss_value += sum(model.losses)\n",
    "\n",
    "    grads = tape.gradient(loss_value, model.trainable_weights)\n",
    "    optimizer.apply_gradients(zip(grads, model.trainable_weights))\n",
    "\n",
    "    if step % 200 == 0:\n",
    "       print('每批次每步损失值 %s: %s' % (step, float(loss_value)))\n",
    "        print('到目前为止看过: %s samples' % ((step + 1) * 64))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
