{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: All posts in this series are continuously updated and published on [github](https://github.com/ChenHuabin321/tensorflow2_tutorials), where you can also download the notebook files for the whole series."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1 TensorBoard: A Killer Tool"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorBoard is another killer-grade tool in TensorFlow: it provides users with model visualization. As we all know, once a neural network model starts training, many details become invisible from the outside. How are the parameters changing? What is the accuracy now? Is the loss still decreasing? These questions are hard to answer. TensorBoard, delivered as a web application, solves this: it visualizes the details of the training process as charts in the browser, so we can clearly see how the weights, biases, and accuracy evolve and grasp the trend of training."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This article introduces two ways of using TensorBoard. Whichever way you choose, first start the TensorBoard web application: it reads the log data produced during training and refreshes the page every 30 seconds. In TensorFlow 2.0, TensorBoard is installed by default, so it can be started directly with the following command:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tensorboard --logdir \"/home/chb/jupyter/logs\""
   ]
  },
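  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are working inside Jupyter, TensorBoard also ships notebook magics that start the web application inline, as an alternative to a separate terminal (using the same example log directory as above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext tensorboard\n",
    "%tensorboard --logdir /home/chb/jupyter/logs"
   ]
  },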
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "logdir is the log directory. Each time a model is trained, TensorBoard creates a subdirectory under it and writes the logs there; the TensorBoard web application reads those logs to track the training state of the model and update the page.\n",
    "\n",
    "If the command runs successfully, the page can be opened on local port 6006. At this point, however, it looks like the screenshot below, because no model has been trained yet and no logs have been written to the specified directory."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To get the training data written into the specified directory, TensorBoard must be embedded into the model's training process. TensorFlow provides two ways to do this; below we introduce both through the training of a model on the mnist dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2 Using TensorBoard in Model.fit()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "import tensorboard\n",
    "import datetime"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "mnist = tf.keras.datasets.mnist\n",
    "\n",
    "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n",
    "x_train, x_test = x_train / 255.0, x_test / 255.0\n",
    "\n",
    "def create_model():\n",
    "  return tf.keras.models.Sequential([\n",
    "    tf.keras.layers.Flatten(input_shape=(28, 28)),\n",
    "    tf.keras.layers.Dense(512, activation='relu'),\n",
    "    tf.keras.layers.Dropout(0.2),\n",
    "    tf.keras.layers.Dense(10, activation='softmax')\n",
    "  ])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 60000 samples, validate on 10000 samples\n",
      "Epoch 1/5\n",
      "60000/60000 [==============================] - 4s 71us/sample - loss: 0.2186 - accuracy: 0.9349 - val_loss: 0.1180 - val_accuracy: 0.9640\n",
      "Epoch 2/5\n",
      "60000/60000 [==============================] - 4s 66us/sample - loss: 0.0972 - accuracy: 0.9706 - val_loss: 0.0754 - val_accuracy: 0.9764\n",
      "Epoch 3/5\n",
      "60000/60000 [==============================] - 4s 66us/sample - loss: 0.0685 - accuracy: 0.9781 - val_loss: 0.0696 - val_accuracy: 0.9781\n",
      "Epoch 4/5\n",
      "60000/60000 [==============================] - 4s 66us/sample - loss: 0.0527 - accuracy: 0.9831 - val_loss: 0.0608 - val_accuracy: 0.9808\n",
      "Epoch 5/5\n",
      "60000/60000 [==============================] - 4s 66us/sample - loss: 0.0444 - accuracy: 0.9859 - val_loss: 0.0637 - val_accuracy: 0.9803\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x7f9b690893d0>"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = create_model()\n",
    "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "\n",
    "# Define the log directory. It must be a subdirectory of the directory passed when starting the web app;\n",
    "# a datetime string is a good choice of subdirectory name.\n",
    "log_dir=\"/home/chb/jupyter/logs/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
    "tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)  # create the TensorBoard callback\n",
    "\n",
    "model.fit(x=x_train, \n",
    "          y=y_train, \n",
    "          epochs=5, \n",
    "          validation_data=(x_test, y_test), \n",
    "          callbacks=[tensorboard_callback])  # pass the TensorBoard callback to fit() to embed TensorBoard in the training process"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Through the charts TensorBoard provides, we can see clearly how loss and accuracy change within each epoch of training. Beyond that, the menu bar of the page shows that TensorBoard can display much more:\n",
    "- Under scalars you can see the trends of accuracy, cross entropy, dropout, biases, weights, and so on.\n",
    "\n",
    "- Under images and audio you can see the input data.\n",
    "\n",
    "- Under graphs you can see the structure of the model.\n",
    "\n",
    "- Under histograms you can see the step-by-step distribution of variables such as activations, gradients, or weights; entries closer to the front correspond to more recent steps.\n",
    "\n",
    "- distributions and histograms are two different views of the same data and give an overall picture.\n",
    "\n",
    "- Under embeddings you can see the relationships in high-dimensional data after it is projected into 3D space with PCA (principal component analysis).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That is what TensorBoard offers, and it is nothing short of powerful. Here we also go through the parameters of the TensorBoard constructor:\n",
    "\n",
    "- log_dir: path of the directory where the log files to be parsed by TensorBoard are saved.\n",
    "- histogram_freq: frequency (in epochs) at which activation and weight histograms are computed for the layers of the model. If set to 0, histograms are not computed. Validation data (or a validation split) must be specified for histogram visualization.\n",
    "- write_graph: whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True.\n",
    "- write_grads: whether to visualize gradient histograms in TensorBoard. histogram_freq must be greater than 0.\n",
    "- batch_size: size of the input batches fed to the network for histogram computation.\n",
    "- write_images: whether to write model weights so they can be visualized as images in TensorBoard.\n",
    "- embeddings_freq: frequency (in epochs) at which the selected embedding layers are saved. If set to 0, embeddings are not computed. Data to be visualized in TensorBoard's Embedding tab must be passed as embeddings_data.\n",
    "- embeddings_layer_names: list of names of the layers to watch. If None or an empty list, all embedding layers are monitored.\n",
    "- embeddings_metadata: dictionary mapping layer names to the file names in which the metadata for that embedding layer is saved. If the same metadata file is used for all embedding layers, a single string can be passed.\n",
    "- embeddings_data: data to be embedded at the layers specified in embeddings_layer_names. A Numpy array (if the model has a single input) or a list of Numpy arrays (if the model has multiple inputs).\n",
    "- update_freq: 'batch', 'epoch', or an integer. With 'batch', losses and metrics are written to TensorBoard after every batch; the same applies to 'epoch'. With an integer, say 1000, the callback writes the metrics and losses every 1000 samples. Note that writing to TensorBoard too frequently slows down training.\n",
    "\n",
    "One exception can be raised:\n",
    "- ValueError: if histogram_freq is set but no validation data is provided.\n"
   ]
  },
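  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make these parameters concrete, here is a small sketch that combines several of them in one callback; the path and parameter values are illustrative choices, not requirements:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative TensorBoard callback; path and parameter values are examples only\n",
    "tensorboard_cb = tf.keras.callbacks.TensorBoard(\n",
    "    log_dir='/home/chb/jupyter/logs/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S'),\n",
    "    histogram_freq=1,     # histograms every epoch (requires validation data)\n",
    "    write_graph=True,     # log the model graph; may enlarge the log files\n",
    "    write_images=False,   # do not render weights as images\n",
    "    update_freq='epoch')  # write losses and metrics once per epoch"
   ]
  },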
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3 Embedding TensorBoard in Custom Training Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When training a model, we can use the methods of the tf.summary module inside functions such as tf.GradientTape() to specify, in a customized way, exactly which values TensorBoard should display."
   ]
  },
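  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The basic pattern is always the same: create a file writer once, then call the tf.summary methods inside a with writer.as_default(): block with an explicit step. A minimal sketch (the directory name is just an example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal tf.summary pattern; '/tmp/tb_demo' is an example directory\n",
    "demo_writer = tf.summary.create_file_writer('/tmp/tb_demo')\n",
    "for step in range(3):\n",
    "    with demo_writer.as_default():  # route summary calls to this writer\n",
    "        tf.summary.scalar('demo_metric', 1.0 / (step + 1), step=step)\n",
    "demo_writer.flush()  # make sure the events reach disk"
   ]
  },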
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, let us build a model, this time with the fashion_mnist dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import datetime\n",
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def preprocess(x, y):\n",
    "    x = tf.cast(x, dtype=tf.float32) / 255.\n",
    "    y = tf.cast(y, dtype=tf.int32)\n",
    "    return x, y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "(x, y), (x_test, y_test) = datasets.fashion_mnist.load_data()\n",
    "db = tf.data.Dataset.from_tensor_slices((x, y))\n",
    "db = db.map(preprocess).shuffle(10000).batch(128)\n",
    "db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))\n",
    "db_test = db_test.map(preprocess).batch(128)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_3\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "dense_15 (Dense)             multiple                  200960    \n",
      "_________________________________________________________________\n",
      "dense_16 (Dense)             multiple                  32896     \n",
      "_________________________________________________________________\n",
      "dense_17 (Dense)             multiple                  8256      \n",
      "_________________________________________________________________\n",
      "dense_18 (Dense)             multiple                  2080      \n",
      "_________________________________________________________________\n",
      "dense_19 (Dense)             multiple                  330       \n",
      "=================================================================\n",
      "Total params: 244,522\n",
      "Trainable params: 244,522\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = Sequential([\n",
    "    layers.Dense(256, activation=tf.nn.relu),  #  [b, 784]  --> [b, 256]\n",
    "    layers.Dense(128, activation=tf.nn.relu),  #  [b, 256]  --> [b, 128]\n",
    "    layers.Dense(64, activation=tf.nn.relu),  #  [b, 128]  --> [b, 64]\n",
    "    layers.Dense(32, activation=tf.nn.relu),  #  [b, 64]  --> [b, 32]\n",
    "    layers.Dense(10)  #  [b, 32]  --> [b, 10]\n",
    "    ]\n",
    ")\n",
    "model.build(input_shape=[None,28*28])\n",
    "model.summary()\n",
    "optimizer = optimizers.Adam(lr=1e-3)  # learning rate 1e-3\n",
    "# Specify the log directory\n",
    "log_dir=\"/home/chb/jupyter/logs/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
    "summary_writer = tf.summary.create_file_writer(log_dir)  # create a writer handle for the log files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "db_iter = iter(db)\n",
    "images, labels = next(db_iter)  # take the first batch from the dataset\n",
    "# A reshape is required: the first dimension is the number of images (the batch size),\n",
    "# 28*28 is the image size, and 1 is the channel count (these are grayscale images)\n",
    "images = tf.reshape(images, (-1, 28, 28, 1))\n",
    "with summary_writer.as_default():  # write the first batch of images to TensorBoard\n",
    "    tf.summary.image('Training data', images, max_outputs=5, step=0)   # max_outputs caps the number of images shown"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 train_loss: 0.004282377735277017 test_acc: 0.8435\n",
      "1 train_loss: 0.0029437638364732265 test_acc: 0.8635\n",
      "2 train_loss: 0.0025979293311635654 test_acc: 0.858\n",
      "3 train_loss: 0.0024499946276346843 test_acc: 0.8698\n",
      "4 train_loss: 0.0022926158788303536 test_acc: 0.8777\n",
      "5 train_loss: 0.002190616005907456 test_acc: 0.8703\n",
      "6 train_loss: 0.0020421392366290095 test_acc: 0.8672\n",
      "7 train_loss: 0.001972314653545618 test_acc: 0.8815\n",
      "8 train_loss: 0.0018821696805457274 test_acc: 0.882\n",
      "9 train_loss: 0.0018143038821717104 test_acc: 0.8874\n",
      "10 train_loss: 0.0017742110469688972 test_acc: 0.8776\n",
      "11 train_loss: 0.0017088291154553493 test_acc: 0.8867\n",
      "12 train_loss: 0.0016564140267670154 test_acc: 0.8883\n",
      "13 train_loss: 0.001609446036318938 test_acc: 0.8853\n",
      "14 train_loss: 0.0015313156222303709 test_acc: 0.8939\n",
      "15 train_loss: 0.0014887714397162199 test_acc: 0.8793\n",
      "16 train_loss: 0.001450310030952096 test_acc: 0.8853\n",
      "17 train_loss: 0.001389076333368818 test_acc: 0.892\n",
      "18 train_loss: 0.0013547154798482855 test_acc: 0.892\n",
      "19 train_loss: 0.0013331565233568351 test_acc: 0.8879\n",
      "20 train_loss: 0.001276270254018406 test_acc: 0.8919\n",
      "21 train_loss: 0.001228199392867585 test_acc: 0.8911\n",
      "22 train_loss: 0.0012089030482495824 test_acc: 0.8848\n",
      "23 train_loss: 0.0011713500657429298 test_acc: 0.8822\n",
      "24 train_loss: 0.0011197352315609655 test_acc: 0.8898\n",
      "25 train_loss: 0.0011078068762707214 test_acc: 0.8925\n",
      "26 train_loss: 0.0010750674727062384 test_acc: 0.8874\n",
      "27 train_loss: 0.0010422117731223503 test_acc: 0.8917\n",
      "28 train_loss: 0.0010244071063275138 test_acc: 0.8851\n",
      "29 train_loss: 0.0009715937084207933 test_acc: 0.8929\n"
     ]
    }
   ],
   "source": [
    "tf.summary.trace_on(graph=True, profiler=True)  # start graph/profiler tracing (a later tf.summary.trace_export() call is needed to write the trace)\n",
    "for epoch in range(30):\n",
    "    train_loss = 0\n",
    "    train_num = 0\n",
    "    for step, (x, y) in enumerate(db):\n",
    "\n",
    "        x = tf.reshape(x, [-1, 28*28])\n",
    "\n",
    "        with tf.GradientTape() as tape: \n",
    "            logits = model(x)\n",
    "            y_onehot = tf.one_hot(y,depth=10)\n",
    "\n",
    "            loss_mse = tf.reduce_mean(tf.losses.MSE(y_onehot, logits))\n",
    "            loss_ce = tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True)\n",
    "            \n",
    "            loss_ce = tf.reduce_mean(loss_ce)  # mean loss over the batch\n",
    "        grads = tape.gradient(loss_ce, model.trainable_variables)  # compute gradients\n",
    "        optimizer.apply_gradients(zip(grads, model.trainable_variables)) # apply gradients to update the weights\n",
    "\n",
    "        train_loss += float(loss_ce)\n",
    "        train_num += x.shape[0]\n",
    "        \n",
    "    loss = train_loss / train_num  # average loss for this epoch\n",
    "    with summary_writer.as_default():  # write the loss to TensorBoard\n",
    "        tf.summary.scalar('train_loss', loss, step=epoch)\n",
    "\n",
    "    total_correct = 0\n",
    "    total_num = 0\n",
    "    for x,y in db_test:  # evaluate accuracy on the test set after each epoch\n",
    "        x = tf.reshape(x, [-1, 28*28])\n",
    "        logits = model(x)\n",
    "        prob = tf.nn.softmax(logits, axis=1)\n",
    "        pred = tf.argmax(prob, axis=1)\n",
    "        pred = tf.cast(pred, dtype=tf.int32)\n",
    "        correct = tf.equal(pred, y)\n",
    "        correct = tf.reduce_sum(tf.cast(correct, dtype=tf.int32))\n",
    "\n",
    "        total_correct += int(correct)\n",
    "        total_num += x.shape[0]\n",
    "    acc = total_correct / total_num  # overall accuracy\n",
    "    with summary_writer.as_default():  # write the accuracy to TensorBoard\n",
    "        tf.summary.scalar('test_acc', acc, step=epoch)\n",
    "    print(epoch, 'train_loss:',loss,'test_acc:', acc)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The TensorBoard web interface then looks like this:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides scalar and image, which the examples above use, the summary module provides methods for embedding other kinds of data into TensorBoard:"
   ]
  },
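  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, tf.summary.text and tf.summary.histogram follow exactly the same writer pattern as scalar and image; the values below are made up purely for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Other summary types reuse the writer pattern; values here are illustrative\n",
    "other_writer = tf.summary.create_file_writer('/tmp/tb_summary_demo')\n",
    "with other_writer.as_default():\n",
    "    tf.summary.text('notes', 'first training run', step=0)  # free-form text\n",
    "    tf.summary.histogram('demo_weights', tf.random.normal([1000]), step=0)  # value distribution\n",
    "other_writer.flush()"
   ]
  },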
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**References**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "https://tensorflow.google.cn/tensorboard/get_started\n",
    "\n",
    "http://www.imooc.com/article/49359\n",
    "\n",
    "https://www.jianshu.com/p/7f728730c488\n",
    "\n",
    "https://blog.csdn.net/z_feng12489/article/details/89920398#_103"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "tensorflow_gpu",
   "language": "python",
   "name": "tensorflow_gpu"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
