{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 12.3 Multi-GPU Parallelism\n",
    "Section 12.2 introduced the common modes for training deep learning models in a distributed fashion. This section gives concrete TensorFlow code for **training a deep learning model in parallel on the multiple GPUs of a single machine**. Because the GPUs in one machine usually have similar performance, synchronous training is the more common choice in this setting.\n",
    "\n",
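    "The heart of synchronous training is that each GPU computes gradients on its own batch, the per-variable gradients are averaged across GPUs, and the averaged gradients update a single shared set of parameters. As a framework-free illustration of the averaging step, here is a minimal sketch in plain Python; the `tower_grads` structure is hypothetical but mirrors the list of (gradient, variable) pairs that `compute_gradients` returns per GPU:\n",
    "\n",
    "```python\n",
    "# Sketch: average the per-variable gradients produced by several GPU towers.\n",
    "# tower_grads[i] holds the (gradient, variable) pairs computed on GPU i.\n",
    "def average_gradients(tower_grads):\n",
    "    averaged = []\n",
    "    # zip(*tower_grads) groups the entries for the same variable across towers.\n",
    "    for grad_and_vars in zip(*tower_grads):\n",
    "        grads = [g for g, _ in grad_and_vars]\n",
    "        avg = sum(grads) / len(grads)\n",
    "        var = grad_and_vars[0][1]  # the variable is shared by all towers\n",
    "        averaged.append((avg, var))\n",
    "    return averaged\n",
    "\n",
    "# Gradients for variables 'w' and 'b' from two towers.\n",
    "tower_grads = [[(0.25, 'w'), (1.0, 'b')],  # GPU 0\n",
    "               [(0.75, 'w'), (3.0, 'b')]]  # GPU 1\n",
    "print(average_gradients(tower_grads))  # [(0.5, 'w'), (2.0, 'b')]\n",
    "```\n",
    "\n",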
    "The code below trains a deep learning model on multiple GPUs to solve the MNIST problem. The sample reuses the code framework of Section 5.5 and the mnist_inference.py program given there for the forward pass of the network. The following listing is the new training program mnist_multigpu_train.py."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %load mnist_multigpu_train.py\n",
    "from datetime import datetime\n",
    "import os\n",
    "import time\n",
    "\n",
    "import tensorflow as tf\n",
    "import mnist_inference\n",
    "\n",
    "# Parameters used when training the network.\n",
    "BATCH_SIZE = 100\n",
    "LEARNING_RATE_BASE = 0.001\n",
    "LEARNING_RATE_DECAY = 0.99\n",
    "REGULARAZTION_RATE = 0.0001\n",
    "TRAINING_STEPS = 1000\n",
    "MOVING_AVERAGE_DECAY = 0.99\n",
    "N_GPU = 2\n",
    "\n",
    "# Paths for the logs and the saved model.\n",
    "MODEL_SAVE_PATH = \"logs_and_models/\"\n",
    "MODEL_NAME = \"model.ckpt\"\n",
    "\n",
    "# Path to the training data. Each GPU needs its own training data, so feeding it\n",
    "# through placeholders would require preparing several copies by hand. To simplify\n",
    "# obtaining the training data, we instead read it from a TFRecord file with the\n",
    "# Dataset API introduced in Chapter 7. DATA_PATH therefore points to the MNIST\n",
    "# training data after it has been converted to the TFRecord format; Chapter 7\n",
    "# describes the conversion in detail, so it is not repeated here.\n",
    "DATA_PATH = \"output.tfrecords\"\n",
    "\n",
    "# Build the input pipeline that produces the training data; see Chapter 7 for details.\n",
    "def get_input():\n",
    "    dataset = tf.data.TFRecordDataset([DATA_PATH])\n",
    "\n",
    "    # Schema used to parse each record.\n",
    "    def parser(record):\n",
    "        features = tf.parse_single_example(\n",
    "            record,\n",
    "            features={\n",
    "                'image_raw': tf.FixedLenFeature([], tf.string),\n",
    "                'pixels': tf.FixedLenFeature([], tf.int64),\n",
    "                'label': tf.FixedLenFeature([], tf.int64),\n",
    "            })\n",
    "\n",
    "        # Decode the image and the label.\n",
    "        decoded_image = tf.decode_raw(features['image_raw'], tf.uint8)\n",
    "        reshaped_image = tf.reshape(decoded_image, [784])\n",
    "        retyped_image = tf.cast(reshaped_image, tf.float32)\n",
    "        label = tf.cast(features['label'], tf.int32)\n",
    "\n",
    "        return retyped_image, label\n",
    "\n",
    "    # Assemble the input pipeline.\n",
    "    dataset = dataset.map(parser)\n",
    "    dataset = dataset.shuffle(buffer_size=10000)\n",
    "    dataset = dataset.repeat(10)\n",
    "    dataset = dataset.batch(BATCH_SIZE)\n",
    "    iterator = dataset.make_one_shot_iterator()\n",
    "\n",
    "    features, labels = iterator.get_next()\n",
    "    return features, labels\n",
    "\n",
    "# Compute the total loss for the given training data, regularizer and name scope.\n",
    "# The name scope is needed because the regularization losses computed on every GPU\n",
    "# are added to a collection named 'losses'; without restricting the lookup to the\n",
    "# current scope, the regularization losses of the other GPUs would be included too.\n",
    "def get_loss(x, y_, regularizer, scope, reuse_variables=None):\n",
    "    # Reuse the function from Section 5.5 to compute the forward pass.\n",
    "    with tf.variable_scope(tf.get_variable_scope(), reuse=reuse_variables):\n",
    "        y = mnist_inference.inference(x, regularizer)\n",
    "    # Cross-entropy loss.\n",
    "    cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n",
    "        logits=y, labels=y_))\n",
    "    # Regularization loss computed on the current GPU.\n",
    "    regularization_loss = tf.add_n(tf.get_collection('losses', scope))\n",
    "    # Total loss.\n",
    "    loss = cross_entropy + regularization_loss\n",
    "    return loss\n",
    "\n",
    "# Average the gradient of every variable across the GPUs.\n",
    "def average_gradients(tower_grads):\n",
    "    average_grads = []\n",
    "\n",
    "    # Iterate over the variables together with the gradients computed for them on each GPU.\n",
    "    for grad_and_vars in zip(*tower_grads):\n",
    "        # Average the gradients from all GPUs.\n",
    "        grads = []\n",
    "        for g, _ in grad_and_vars:\n",
    "            expanded_g = tf.expand_dims(g, 0)\n",
    "            grads.append(expanded_g)\n",
    "        grad = tf.concat(grads, 0)\n",
    "        grad = tf.reduce_mean(grad, 0)\n",
    "\n",
    "        v = grad_and_vars[0][1]\n",
    "        grad_and_var = (grad, v)\n",
    "        # Pair the averaged gradient with its variable.\n",
    "        average_grads.append(grad_and_var)\n",
    "    # Return the averaged gradients of all variables; they are used to update the variables.\n",
    "    return average_grads\n",
    "\n",
    "# Main training procedure.\n",
    "def main(argv=None):\n",
    "    # Keep the simple operations on the CPU; only the training computation runs on the GPUs.\n",
    "    with tf.Graph().as_default(), tf.device('/cpu:0'):\n",
    "        # Set up the basic training pipeline.\n",
    "        x, y_ = get_input()\n",
    "        regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)\n",
    "        \n",
    "        global_step = tf.get_variable('global_step', [], initializer=tf.constant_initializer(0), trainable=False)\n",
    "        learning_rate = tf.train.exponential_decay(\n",
    "            LEARNING_RATE_BASE, global_step, 60000 / BATCH_SIZE, LEARNING_RATE_DECAY)       \n",
    "        \n",
    "        opt = tf.train.GradientDescentOptimizer(learning_rate)\n",
    "        \n",
    "        tower_grads = []\n",
    "        reuse_variables = False\n",
    "        # Run the optimization on each of the GPUs.\n",
    "        for i in range(N_GPU):\n",
    "            # Pin this tower's computation to one GPU.\n",
    "            with tf.device('/gpu:%d' % i):\n",
    "                with tf.name_scope('GPU_%d' % i) as scope:\n",
    "                    cur_loss = get_loss(x, y_, regularizer, scope, reuse_variables)\n",
    "                    # After the variables have been created once, set the reuse flag\n",
    "                    # to True so that all GPUs update the same set of parameters.\n",
    "                    reuse_variables = True\n",
    "                    grads = opt.compute_gradients(cur_loss)\n",
    "                    tower_grads.append(grads)\n",
    "        \n",
    "        # Average the gradients across the GPUs.\n",
    "        grads = average_gradients(tower_grads)\n",
    "        for grad, var in grads:\n",
    "            if grad is not None:\n",
    "                tf.summary.histogram('gradients_on_average/%s' % var.op.name, grad)\n",
    "\n",
    "        # Update the parameters with the averaged gradients.\n",
    "        apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)\n",
    "        for var in tf.trainable_variables():\n",
    "            tf.summary.histogram(var.op.name, var)\n",
    "\n",
    "        # Maintain moving averages of the variables.\n",
    "        variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)\n",
    "        variables_to_average = (tf.trainable_variables() + tf.moving_average_variables())\n",
    "        variables_averages_op = variable_averages.apply(variables_to_average)\n",
    "        # Each training step updates both the variables and their moving averages.\n",
    "        train_op = tf.group(apply_gradient_op, variables_averages_op)\n",
    "\n",
    "        saver = tf.train.Saver()\n",
    "        summary_op = tf.summary.merge_all()        \n",
    "        init = tf.global_variables_initializer()\n",
    "        with tf.Session(config=tf.ConfigProto(\n",
    "                allow_soft_placement=True, log_device_placement=True)) as sess:\n",
    "            # Initialize all variables.\n",
    "            init.run()\n",
    "            summary_writer = tf.summary.FileWriter(MODEL_SAVE_PATH, sess.graph)\n",
    "\n",
    "            for step in range(TRAINING_STEPS):\n",
    "                # Run one training step and measure how long it takes.\n",
    "                start_time = time.time()\n",
    "                _, loss_value = sess.run([train_op, cur_loss])\n",
    "                duration = time.time() - start_time\n",
    "                \n",
    "                # Periodically report the training progress and measure the training speed.\n",
    "                if step != 0 and step % 10 == 0:\n",
    "                    # Number of training examples consumed in this step. Every GPU\n",
    "                    # processes one batch per step, so the total is\n",
    "                    # batch size x number of GPUs.\n",
    "                    num_examples_per_step = BATCH_SIZE * N_GPU\n",
    "\n",
    "                    # num_examples_per_step is the number of examples used in this\n",
    "                    # step and duration is the wall time the step took, so the\n",
    "                    # throughput is num_examples_per_step / duration examples per second.\n",
    "                    examples_per_sec = num_examples_per_step / duration\n",
    "\n",
    "                    # Every GPU processes one batch per step, so the average time\n",
    "                    # spent on a single batch is duration / number of GPUs.\n",
    "                    sec_per_batch = duration / N_GPU\n",
    "    \n",
    "                    # Print the training statistics.\n",
    "                    format_str = ('%s: step %d, loss = %.2f (%.1f examples/sec; %.3f sec/batch)')\n",
    "                    print(format_str % (datetime.now(), step, loss_value, examples_per_sec, sec_per_batch))\n",
    "                    \n",
    "                    # Write summaries so the training can be visualized in TensorBoard.\n",
    "                    summary = sess.run(summary_op)\n",
    "                    summary_writer.add_summary(summary, step)\n",
    "    \n",
    "                # Periodically save the current model.\n",
    "                if step % 1000 == 0 or (step + 1) == TRAINING_STEPS:\n",
    "                    checkpoint_path = os.path.join(MODEL_SAVE_PATH, MODEL_NAME)\n",
    "                    saver.save(sess, checkpoint_path, global_step=step)\n",
    "        \n",
    "if __name__ == '__main__':\n",
    "    tf.app.run()"
   ]
  },
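  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To sanity-check the statistics printed by the training loop: every step consumes BATCH_SIZE × N_GPU examples, so the throughput and per-batch time follow directly from the measured step duration. A minimal sketch of the same arithmetic (the duration value is made up for illustration; this run assumes 4 GPUs as on g2.8xlarge):\n",
    "\n",
    "```python\n",
    "# The same throughput arithmetic as in the training loop above.\n",
    "BATCH_SIZE = 100\n",
    "N_GPU = 4                    # the g2.8xlarge run uses 4 GPUs\n",
    "duration = 0.0262            # hypothetical wall time of one step, in seconds\n",
    "\n",
    "num_examples_per_step = BATCH_SIZE * N_GPU  # each GPU consumes one batch\n",
    "examples_per_sec = num_examples_per_step / duration\n",
    "sec_per_batch = duration / N_GPU            # average time per single batch\n",
    "\n",
    "print('%.1f examples/sec; %.3f sec/batch' % (examples_per_sec, sec_per_batch))\n",
    "```\n",
    "\n",
    "With these numbers the printed throughput is on the order of 15,000 examples/sec, the same magnitude as the sample output below."
   ]
  },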
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since my machine has only a single GPU, the program is not run here. On an AWS g2.8xlarge instance, four GPUs can be used at once, producing output similar to the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "step 10, loss = 71.90 (15292.3 examples/sec; 0.007 sec/batch)\n",
    "step 20, loss = 37.97 (18758.3 examples/sec; 0.005 sec/batch)\n",
    "step 30, loss = 9.54 (16313.3 examples/sec; 0.006 sec/batch)\n",
    "step 40, loss = 11.84 (14199.0 examples/sec; 0.007 sec/batch)\n",
    "...\n",
    "step 980, loss = 0.66 (15034.7 examples/sec; 0.007 sec/batch)\n",
    "step 990, loss = 1.56 (16134.1 examples/sec; 0.006 sec/batch)\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p align='center'>\n",
    "    <img src=images/图12.8.JPG>\n",
    "    <center>Figure 12-8 GPU utilization when running the MNIST sample program on an AWS g2.8xlarge instance</center>\n",
    "</p>\n",
    "\n",
    "The figure above shows the utilization of the individual GPUs while the sample code runs. Because the network defined in Section 5.5 is fairly small, the GPU utilization shown is low; when training a large neural network model, TensorFlow will fully occupy every GPU it uses. Figure 12-9 shows the TensorFlow computation graph of the sample code as visualized by TensorBoard, where the color of a node indicates the device it is placed on, e.g. black for the CPU and white for the first GPU. Figure 12-9 shows that the main training computation is placed in the four modules GPU0, GPU1, GPU2 and GPU3, each of which runs on one GPU. Comparing this visualization with the synchronous distributed training workflow introduced in Figure 12-5 shows that the two structures are very similar.\n",
    "<p align='center'>\n",
    "    <img src=images/图12.9.JPG>\n",
    "    <center>Figure 12-9 Visualization of the TensorFlow computation graph when 4 GPUs are used</center>\n",
    "</p>\n",
    "\n",
    "By adjusting the parameter N_GPU, you can measure how training speed scales with the number of GPUs in synchronous mode. Figure 12-10 shows how the training speed of the MNIST sample code changes as the number of GPUs grows: with two GPUs, the model trains 1.92 times as fast as with one GPU. **In other words, while the number of GPUs is small, training speed grows almost linearly with the number of GPUs.**\n",
    "<p align='center'>\n",
    "    <img src=images/图12.10.JPG>\n",
    "    <center>Figure 12-10 How training speed changes as the number of GPUs grows</center>\n",
    "</p>\n",
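    "The scaling claim above can also be stated as parallel efficiency: the achieved speedup divided by the ideal linear speedup. A one-line sketch using the 1.92x figure quoted in the text (measured timings would be needed for other GPU counts):\n",
    "\n",
    "```python\n",
    "# Parallel efficiency: achieved speedup relative to ideal linear speedup.\n",
    "def parallel_efficiency(speedup, n_gpu):\n",
    "    return speedup / n_gpu\n",
    "\n",
    "print(parallel_efficiency(1.92, 2))  # 0.96: two GPUs reach 96% of linear scaling\n",
    "```\n",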
    "\n",
    "Figure 12-11, from [Google's official announcement](https://ai.googleblog.com/2016/04/announcing-tensorflow-08-now-with.html), shows the speedup as the number of GPUs grows further. **Although the speedup is no longer linear once more GPUs are added, TensorFlow can still effectively accelerate the training of deep learning models by adding GPUs.** More of Google's performance benchmarks are available [here](https://www.tensorflow.org/guide/performance/benchmarks).\n",
    "<p align='center'>\n",
    "    <img src=images/图12.11.JPG>\n",
    "    <center>Figure 12-11 How training speed changes as the number of GPUs grows, from Google's official benchmark data</center>\n",
    "</p>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
