{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "\n",
    "# 8.5 Multi-Machine Multi-GPU Parallel Computing: Distributed Training with PaddlePaddle Fleet\n",
    "\n",
    "Original content of the MXNet edition: 8.5 Concise Implementation of Multi-GPU Computation, link: http://zh.gluon.ai/chapter_computational-performance/multiple-gpus-gluon.html\n",
    "This section has been rewritten and now covers distributed training with PaddlePaddle Fleet.\n",
    "\n",
    "PaddlePaddle's distributed training technology grew out of Baidu's production practice; it is a training framework that has been validated on extremely large-scale business data.\n",
    "\n",
    "Application areas where PaddlePaddle distributed training has proven itself in practice include natural language processing, computer vision, search, and recommendation.\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/22def4b5870a430e8378a0c2337641f5ca65fa8a98614fb59cd2f804a4e67064)\n",
    "\n",
    "## PaddlePaddle's complete set of parallelism modes\n",
    "\n",
    "* Data parallelism: for the data-parallel mode most widely used in industry, PaddlePaddle has refined several technologies around real business needs. It provides both a collective-communication architecture and a parameter-server architecture, supports the synchronous and asynchronous training mechanisms common in industrial practice, and offers distributed optimization algorithms with guaranteed convergence.\n",
    "\n",
    "* Pipeline parallelism: targeting heterogeneous hardware, pipeline parallelism splits the model's computation across different devices and fully pipelines it, greatly improving the overall utilization of heterogeneous hardware.\n",
    "\n",
    "* Model parallelism: for very large-scale classification problems, PaddlePaddle provides model parallelism that parallelizes both computation and storage, solving problems a single GPU cannot handle."
   ]
  },
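  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Data parallelism is the mode used throughout this section, so it helps to see why averaging per-device gradients is valid. The following plain-Python sketch (an illustration only, not Fleet code) shows that splitting a batch across two workers, computing each shard's gradient, and averaging reproduces the full-batch gradient for a simple squared-error loss:\n",
    "\n",
    "```python\n",
    "# Illustrative sketch of data parallelism (plain Python, not Fleet code).\n",
    "# Loss: mean squared error of a 1-parameter linear model y_hat = w * x.\n",
    "\n",
    "def grad(w, xs, ys):\n",
    "    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)\n",
    "    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)\n",
    "\n",
    "xs = [1.0, 2.0, 3.0, 4.0]\n",
    "ys = [2.0, 4.0, 6.0, 8.0]\n",
    "w = 0.5\n",
    "\n",
    "# Full-batch gradient on one worker.\n",
    "g_full = grad(w, xs, ys)\n",
    "\n",
    "# Data parallel: shard the batch across 2 equal-size workers and average.\n",
    "g_avg = (grad(w, xs[:2], ys[:2]) + grad(w, xs[2:], ys[2:])) / 2\n",
    "\n",
    "print(g_full, g_avg)  # identical for equal-size shards\n",
    "```\n",
    "\n",
    "With equal-size shards, the average of the shard gradients equals the full-batch gradient, which is why synchronous data-parallel training matches single-device training step for step."
   ]
  },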
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Launching distributed jobs with fleetrun\n",
    "\n",
    "Paddle provides the command-line launcher fleetrun. Combined with Paddle's high-level distributed API paddle.distributed.fleet, it makes it easy to launch distributed jobs in either collective-communication mode or parameter-server mode. fleetrun can be used in both static-graph and dynamic-graph scenarios.\n",
    "\n",
    "Note: at present, dynamic-graph distributed training launched through paddle.distributed.fleet only supports collective-communication mode, not parameter-server mode.\n",
    "\n",
    "* Single-machine multi-GPU training\n",
    "\n",
    "To launch a single-machine 4-GPU job, simply point --gpus at 4 idle GPUs.\n",
    "\n",
    "        fleetrun --gpus=0,1,2,3 train.py\n",
    "Note: if export CUDA_VISIBLE_DEVICES=0,1,2,3 has been set, you can simply run:\n",
    "\n",
    "        export CUDA_VISIBLE_DEVICES=0,1,2,3\n",
    "        fleetrun train.py\n",
    "\n",
    "* Multi-machine multi-GPU training\n",
    "\n",
    "[Example 1] 2 machines, 8 GPUs (4 GPUs per node)\n",
    "\n",
    "        fleetrun --ips=\"xx.xx.xx.xx,yy.yy.yy.yy\" --gpus=0,1,2,3 train.py\n",
    "Note: if every machine has export CUDA_VISIBLE_DEVICES=0,1,2,3 set, you can launch directly on each node:\n",
    "\n",
    "        export CUDA_VISIBLE_DEVICES=0,1,2,3\n",
    "        fleetrun --ips=\"xx.xx.xx.xx,yy.yy.yy.yy\" train.py\n",
    "[Example 2] 2 machines, 16 GPUs (8 GPUs per node, assuming all 8 GPUs on each machine are available)\n",
    "\n",
    "        fleetrun --ips=\"xx.xx.xx.xx,yy.yy.yy.yy\" train.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Single-machine multi-GPU training with Fleet\n",
    "Dynamic-graph distributed training with the Fleet API is actually very simple; only 3 changes are needed:\n",
    "\n",
    "* Import the paddle.distributed.fleet package\n",
    "\n",
    "        from paddle.distributed import fleet\n",
    "* Initialize the fleet environment\n",
    "\n",
    "        fleet.init(is_collective=True)\n",
    "* Obtain the distributed optimizer and distributed model through fleet\n",
    "\n",
    "        strategy = fleet.DistributedStrategy()\n",
    "        adam = fleet.distributed_optimizer(adam, strategy=strategy)\n",
    "        dp_layer = fleet.distributed_model(layer)\n",
    "### Example from the Fleet manual\n",
    "It uses the simplest possible network of 2 fully connected layers, with random numbers as input, mainly to show the three code changes that fleet distributed training requires:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting train_fleet.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile train_fleet.py\n",
    "# -*- coding: UTF-8 -*-\n",
    "import paddle\n",
    "import paddle.nn as nn\n",
    "# Distributed step 1: import the paddle.distributed.fleet package\n",
    "from paddle.distributed import fleet\n",
    "\n",
    "# Define a fully connected network; it must subclass nn.Layer\n",
    "class LinearNet(nn.Layer):\n",
    "    def __init__(self):\n",
    "        super(LinearNet, self).__init__()\n",
    "        self._linear1 = nn.Linear(10, 10)\n",
    "        self._linear2 = nn.Linear(10, 1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self._linear2(self._linear1(x))\n",
    "\n",
    "\n",
    "# 1. Enable dynamic-graph (imperative) mode\n",
    "paddle.disable_static()\n",
    "\n",
    "# Distributed step 2: initialize fleet\n",
    "fleet.init(is_collective=True)\n",
    "\n",
    "# 2. Create the network, loss function, and optimizer\n",
    "layer = LinearNet()\n",
    "loss_fn = nn.MSELoss()\n",
    "adam = paddle.optimizer.Adam(\n",
    "    learning_rate=0.001, parameters=layer.parameters())\n",
    "\n",
    "# Distributed step 3: obtain the distributed optimizer and model through fleet\n",
    "strategy = fleet.DistributedStrategy()\n",
    "adam = fleet.distributed_optimizer(adam, strategy=strategy)\n",
    "dp_layer = fleet.distributed_model(layer)\n",
    "\n",
    "for step in range(20):\n",
    "    # 3. Run the forward pass\n",
    "    inputs = paddle.randn([10, 10], 'float32')\n",
    "    outputs = dp_layer(inputs)\n",
    "    labels = paddle.randn([10, 1], 'float32')\n",
    "    loss = loss_fn(outputs, labels)\n",
    "\n",
    "    print(\"step:{}\\tloss:{}\".format(step, loss.numpy()))\n",
    "\n",
    "    # 4. Run backward computation and update parameters\n",
    "    loss.backward()\n",
    "\n",
    "    adam.step()\n",
    "    adam.clear_grad()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.-----------  Configuration Arguments -----------\n",
      "gpus: 0\n",
      "heter_worker_num: None\n",
      "heter_workers: \n",
      "http_port: None\n",
      "ips: 127.0.0.1\n",
      "log_dir: log\n",
      "server_num: None\n",
      "servers: \n",
      "training_script: train_fleet.py\n",
      "training_script_args: []\n",
      "worker_num: None\n",
      "workers: \n",
      "------------------------------------------------\n",
      "WARNING 2020-11-20 10:36:10,345 launch.py:299] Not found distinct arguments and compiled with cuda. Default use collective mode\n",
      "INFO 2020-11-20 10:36:10,347 launch_utils.py:452] Local start 1 processes. First process distributed environment info (Only For Debug): \n",
      "    +=======================================================================================+\n",
      "    |                        Distributed Envs                      Value                    |\n",
      "    +---------------------------------------------------------------------------------------+\n",
      "    |                     FLAGS_selected_gpus                        0                      |\n",
      "    |                       PADDLE_TRAINER_ID                        0                      |\n",
      "    |                 PADDLE_CURRENT_ENDPOINT                 127.0.0.1:35444               |\n",
      "    |                     PADDLE_TRAINERS_NUM                        1                      |\n",
      "    |                PADDLE_TRAINER_ENDPOINTS                 127.0.0.1:35444               |\n",
      "    +=======================================================================================+\n",
      "\n",
      "INFO 2020-11-20 10:36:10,347 launch_utils.py:456] details abouts PADDLE_TRAINER_ENDPOINTS can be found in log/endpoints.log.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1120 10:36:11.193780   509 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1120 10:36:11.198945   509 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "step:0\tloss:[2.3428144]\n",
      "step:1\tloss:[1.1502899]\n",
      "step:2\tloss:[1.7862391]\n",
      "step:3\tloss:[1.5612096]\n",
      "step:4\tloss:[2.5750048]\n",
      "step:5\tloss:[1.6597044]\n",
      "step:6\tloss:[2.0634024]\n",
      "step:7\tloss:[0.94341743]\n",
      "step:8\tloss:[1.4847057]\n",
      "step:9\tloss:[2.6921244]\n",
      "step:10\tloss:[1.4679241]\n",
      "step:11\tloss:[1.5562341]\n",
      "step:12\tloss:[2.0853114]\n",
      "step:13\tloss:[1.6452634]\n",
      "step:14\tloss:[2.3366485]\n",
      "step:15\tloss:[1.0527436]\n",
      "step:16\tloss:[1.2281861]\n",
      "step:17\tloss:[1.7851313]\n",
      "step:18\tloss:[1.6464276]\n",
      "step:19\tloss:[1.8024923]\n",
      "INFO 2020-11-20 10:36:16,369 launch.py:223] Local processes completed.\n"
     ]
    }
   ],
   "source": [
    "!fleetrun --gpus=0 train_fleet.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### MNIST handwritten-digit recognition, Fleet version\n",
    "1. Handwritten-digit recognition with the basic API likewise needs the same 3 changes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting normal_fleet.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile normal_fleet.py \n",
    "import paddle  # this is the version with the 3 changes\n",
    "# Distributed step 1: import the paddle.distributed.fleet module\n",
    "from paddle.distributed import fleet\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "\n",
    "# Load the training set with batch_size set to 64\n",
    "train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)\n",
    "\n",
    "# Distributed step 2: initialize fleet\n",
    "fleet.init(is_collective=True)\n",
    "\n",
    "def train():\n",
    "    \n",
    "    epochs = 1\n",
    "    net = paddle.vision.models.LeNet()\n",
    "    adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=net.parameters())\n",
    "\n",
    "    # Distributed step 3: obtain the distributed optimizer and model through fleet\n",
    "    strategy = fleet.DistributedStrategy()\n",
    "    adam = fleet.distributed_optimizer(adam, strategy=strategy)\n",
    "    net = fleet.distributed_model(net)\n",
    "\n",
    "    # Training loop (Adam as the optimizer)\n",
    "    for epoch in range(epochs):\n",
    "        for batch_id, data in enumerate(train_loader()):\n",
    "            x_data = data[0]\n",
    "            y_data = data[1]\n",
    "            predicts = net(x_data)  \n",
    "            acc = paddle.metric.accuracy(predicts, y_data, k=2)\n",
    "            avg_acc = paddle.mean(acc)\n",
    "            loss = paddle.nn.functional.cross_entropy(predicts, y_data, reduction='mean') \n",
    "            loss.backward()  # the manual mistakenly wrote avg_loss here\n",
    "            if batch_id % 400 == 0:\n",
    "                print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, loss.numpy(), avg_acc.numpy()))  # the manual mistakenly wrote avg_loss here\n",
    "            adam.step()\n",
    "            adam.clear_grad()\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    train()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.-----------  Configuration Arguments -----------\n",
      "gpus: 0\n",
      "heter_worker_num: None\n",
      "heter_workers: \n",
      "http_port: None\n",
      "ips: 127.0.0.1\n",
      "log_dir: log\n",
      "server_num: None\n",
      "servers: \n",
      "training_script: normal_fleet.py\n",
      "training_script_args: []\n",
      "worker_num: None\n",
      "workers: \n",
      "------------------------------------------------\n",
      "WARNING 2020-11-20 10:36:17,798 launch.py:299] Not found distinct arguments and compiled with cuda. Default use collective mode\n",
      "INFO 2020-11-20 10:36:17,800 launch_utils.py:452] Local start 1 processes. First process distributed environment info (Only For Debug): \n",
      "    +=======================================================================================+\n",
      "    |                        Distributed Envs                      Value                    |\n",
      "    +---------------------------------------------------------------------------------------+\n",
      "    |                     FLAGS_selected_gpus                        0                      |\n",
      "    |                       PADDLE_TRAINER_ID                        0                      |\n",
      "    |                 PADDLE_CURRENT_ENDPOINT                 127.0.0.1:51601               |\n",
      "    |                     PADDLE_TRAINERS_NUM                        1                      |\n",
      "    |                PADDLE_TRAINER_ENDPOINTS                 127.0.0.1:51601               |\n",
      "    +=======================================================================================+\n",
      "\n",
      "INFO 2020-11-20 10:36:17,800 launch_utils.py:456] details abouts PADDLE_TRAINER_ENDPOINTS can be found in log/endpoints.log.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "........\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "..\n",
      "Download finished\n",
      "W1120 10:36:23.973999   617 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1120 10:36:23.978258   617 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [45.631424], acc is: [0.234375]\n",
      "epoch: 0, batch_id: 400, loss is: [45.386208], acc is: [0.15625]\n",
      "epoch: 0, batch_id: 800, loss is: [46.10372], acc is: [0.203125]\n",
      "INFO 2020-11-20 10:36:32,835 launch.py:223] Local processes completed.\n"
     ]
    }
   ],
   "source": [
    "!fleetrun --gpus=0 normal_fleet.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "2. The high-level API handwritten-digit example needs none of those three additions; it can do single-machine multi-GPU (and multi-machine multi-GPU) training as-is."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing hapi_fleet.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile hapi_fleet.py \n",
    "import paddle \n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "\n",
    "lenet = paddle.vision.models.LeNet()\n",
    "\n",
    "# LeNet subclasses paddle.nn.Layer (the network); paddle.Model wraps it with training functionality\n",
    "model = paddle.Model(lenet)\n",
    "\n",
    "# Configure the optimizer, loss, and metric needed for training\n",
    "model.prepare(\n",
    "    paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),\n",
    "    paddle.nn.CrossEntropyLoss(),\n",
    "    paddle.metric.Accuracy(topk=(1, 2))\n",
    "    )\n",
    "\n",
    "# Start training\n",
    "model.fit(train_dataset, epochs=1, batch_size=64, log_freq=400)\n",
    "\n",
    "# Start evaluation\n",
    "model.evaluate(test_dataset, log_freq=100, batch_size=64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.-----------  Configuration Arguments -----------\n",
      "gpus: 0\n",
      "heter_worker_num: None\n",
      "heter_workers: \n",
      "http_port: None\n",
      "ips: 127.0.0.1\n",
      "log_dir: log\n",
      "server_num: None\n",
      "servers: \n",
      "training_script: hapi_fleet.py\n",
      "training_script_args: []\n",
      "worker_num: None\n",
      "workers: \n",
      "------------------------------------------------\n",
      "WARNING 2020-11-20 10:48:42,655 launch.py:299] Not found distinct arguments and compiled with cuda. Default use collective mode\n",
      "INFO 2020-11-20 10:48:42,656 launch_utils.py:452] Local start 1 processes. First process distributed environment info (Only For Debug): \n",
      "    +=======================================================================================+\n",
      "    |                        Distributed Envs                      Value                    |\n",
      "    +---------------------------------------------------------------------------------------+\n",
      "    |                     FLAGS_selected_gpus                        0                      |\n",
      "    |                       PADDLE_TRAINER_ID                        0                      |\n",
      "    |                 PADDLE_CURRENT_ENDPOINT                 127.0.0.1:38703               |\n",
      "    |                     PADDLE_TRAINERS_NUM                        1                      |\n",
      "    |                PADDLE_TRAINER_ENDPOINTS                 127.0.0.1:38703               |\n",
      "    +=======================================================================================+\n",
      "\n",
      "INFO 2020-11-20 10:48:42,657 launch_utils.py:456] details abouts PADDLE_TRAINER_ENDPOINTS can be found in log/endpoints.log.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1120 10:48:47.534435  1290 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1120 10:48:47.538815  1290 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Epoch 1/1\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "step 400/938 - loss: 0.1704 - acc_top1: 0.8720 - acc_top2: 0.9404 - 5ms/step\n",
      "step 800/938 - loss: 0.0857 - acc_top1: 0.9092 - acc_top2: 0.9613 - 5ms/step\n",
      "step 938/938 - loss: 0.0434 - acc_top1: 0.9163 - acc_top2: 0.9652 - 5ms/step\n",
      "Eval begin...\n",
      "step 100/157 - loss: 0.0321 - acc_top1: 0.9508 - acc_top2: 0.9848 - 3ms/step\n",
      "step 157/157 - loss: 0.0969 - acc_top1: 0.9557 - acc_top2: 0.9870 - 3ms/step\n",
      "Eval samples: 10000\n",
      "INFO 2020-11-20 10:48:57,691 launch.py:223] Local processes completed.\n"
     ]
    }
   ],
   "source": [
    "!fleetrun --gpus=0 hapi_fleet.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Multi-machine multi-GPU handwritten-digit recognition\n",
    "Going from single-machine multi-GPU to multi-machine multi-GPU training requires no code changes at all; only the launch command changes. Taking 2 machines with 4 GPUs as an example:\n",
    "\n",
    "        fleetrun --ips=\"xx.xx.xx.xx,yy.yy.yy.yy\" --gpus=0,1 dygraph_fleet.py\n",
    "        \n",
    "Run the launch command above on each of the 2 machines; fleetrun will start a multi-process job in the background on each machine, carrying out distributed multi-machine training. You will see the console log output on the machine whose IP is xx.xx.xx.xx.\n",
    "\n",
    "Below we again use AI Studio to demonstrate multi-machine multi-GPU training; run directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.-----------  Configuration Arguments -----------\n",
      "gpus: 0\n",
      "heter_worker_num: None\n",
      "heter_workers: \n",
      "http_port: None\n",
      "ips: 127.0.0.1\n",
      "log_dir: log\n",
      "server_num: None\n",
      "servers: \n",
      "training_script: normal_fleet.py\n",
      "training_script_args: []\n",
      "worker_num: None\n",
      "workers: \n",
      "------------------------------------------------\n",
      "INFO 2020-12-02 15:37:12,399 launch.py:289] Run collective gpu mode. gpu arguments:['--ips'], cuda count:1\n",
      "INFO 2020-12-02 15:37:12,401 launch_utils.py:452] Local start 1 processes. First process distributed environment info (Only For Debug): \n",
      "    +=======================================================================================+\n",
      "    |                        Distributed Envs                      Value                    |\n",
      "    +---------------------------------------------------------------------------------------+\n",
      "    |                     FLAGS_selected_gpus                        0                      |\n",
      "    |                       PADDLE_TRAINER_ID                        0                      |\n",
      "    |                 PADDLE_CURRENT_ENDPOINT                 127.0.0.1:33109               |\n",
      "    |                     PADDLE_TRAINERS_NUM                        1                      |\n",
      "    |                PADDLE_TRAINER_ENDPOINTS                 127.0.0.1:33109               |\n",
      "    +=======================================================================================+\n",
      "\n",
      "INFO 2020-12-02 15:37:12,401 launch_utils.py:456] details abouts PADDLE_TRAINER_ENDPOINTS can be found in log/endpoints.log.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "........\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "..\n",
      "Download finished\n",
      "W1202 15:37:18.338985  1009 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1202 15:37:18.343415  1009 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [48.277103], acc is: [0.265625]\n",
      "epoch: 0, batch_id: 400, loss is: [46.22221], acc is: [0.265625]\n",
      "epoch: 0, batch_id: 800, loss is: [46.283947], acc is: [0.21875]\n",
      "INFO 2020-12-02 15:37:27,437 launch.py:223] Local processes completed.\n"
     ]
    }
   ],
   "source": [
    "!fleetrun --ips=\"127.0.0.1\" --gpus=0 normal_fleet.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Parameter-server training with Fleet\n",
    "Driven by the wave of big data, the scale of labeled training data has grown rapidly. Image classifiers are now routinely trained on millions or even tens of millions of labeled images (for example, ImageNet contains 14 million images covering more than twenty thousand categories), and speech models on tens of thousands of hours of audio (for example, the Deep Speech 2 system was trained on 11,940 hours of speech and more than 2 million utterances). In real business scenarios, training data can be tens or even hundreds of times larger than these datasets; data of this magnitude consumes enormous compute resources and training time (several days) before the model converges.\n",
    "\n",
    "Distributed training arose to improve training efficiency; among its variants, parameter-server distributed training is a common approach in which parameters are shared through central servers. Unlike single-machine training, in parameter-server distributed training the nodes play different roles:\n",
    "\n",
    "Worker node: reads data, runs the forward pass, computes gradients in the backward pass, and uploads the computed gradients to the server nodes.\n",
    "\n",
    "Server node: after receiving the gradients from all workers, it aggregates them, updates the parameters, and finally sends the parameters back to the workers so a new round of training can begin.\n",
    "\n",
    "Depending on how parameters are updated, training is either synchronous or asynchronous:\n",
    "\n",
    "Synchronous training: all workers progress in lockstep. After finishing each batch, a worker uploads its gradients and then waits for the server to return the updated parameters. The server updates the parameters only after it has received the gradients from every worker, so at any point in time all workers are at the same stage of training.\n",
    "\n",
    "Asynchronous training: unlike synchronous training, parameter updates from any two workers do not affect each other. Whenever a worker finishes a batch and uploads its gradients, the server immediately updates the parameters and returns the result to that worker, which then immediately starts a new round with the latest parameters.\n",
    "\n",
    "At present PaddlePaddle's dynamic graph does not support parameter-server training, but the static graph does. A concrete example is omitted here; see: https://fleet-x.readthedocs.io/en/latest/paddle_fleet_rst/fleet_ps_sync_and_async_cn.html"
   ]
  },
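  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "To make the synchronous/asynchronous distinction concrete, here is a minimal plain-Python simulation of a parameter server (an illustration only, not the Fleet API): synchronous mode waits for all workers and applies one averaged update, while asynchronous mode applies each worker's gradient as soon as it arrives:\n",
    "\n",
    "```python\n",
    "# Minimal parameter-server simulation (plain Python, not the Fleet API).\n",
    "LR = 0.1  # learning rate\n",
    "\n",
    "def sync_update(param, worker_grads):\n",
    "    # Synchronous: wait for all workers, aggregate, then update once.\n",
    "    avg = sum(worker_grads) / len(worker_grads)\n",
    "    return param - LR * avg\n",
    "\n",
    "def async_update(param, worker_grads):\n",
    "    # Asynchronous: apply each worker's gradient immediately on arrival.\n",
    "    for g in worker_grads:\n",
    "        param = param - LR * g\n",
    "    return param\n",
    "\n",
    "grads = [0.5, 1.5, 1.0]  # gradients reported by 3 workers\n",
    "print(sync_update(2.0, grads))   # one averaged step\n",
    "print(async_update(2.0, grads))  # three immediate steps\n",
    "```\n",
    "\n",
    "In a real cluster the asynchronous workers would also fetch the latest parameters between steps; this sketch only shows the difference in when the server applies updates."
   ]
  },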
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Summary: multi-machine multi-GPU parallel computing with PaddlePaddle Fleet\n",
    "\n",
    "PaddlePaddle 2.0 offers a complete solution for parallel computing, in a training framework validated on extremely large-scale business data. Parallel computing really is this simple!\n",
    "\n",
    "\n",
    "## How to use fleet for multi-machine multi-GPU training\n",
    "\n",
    "The program needs the corresponding 3 changes:\n",
    "\n",
    "* Import the paddle.distributed.fleet package\n",
    "\n",
    "        from paddle.distributed import fleet\n",
    "* Initialize the fleet environment\n",
    "\n",
    "        fleet.init(is_collective=True)\n",
    "* Obtain the distributed optimizer and distributed model through fleet\n",
    "\n",
    "        strategy = fleet.DistributedStrategy()\n",
    "        adam = fleet.distributed_optimizer(adam, strategy=strategy)\n",
    "        dp_layer = fleet.distributed_model(layer)\n",
    "\n",
    "* Then run the command:\n",
    "\n",
    "        fleetrun --ips=\"xx.xx.xx.xx,yy.yy.yy.yy\" --gpus=0,1 train.py\n",
    "\n",
    "List one IP address per machine; the log output appears on the first machine.\n",
    "\n",
    "### With high-level API code, no program changes are needed; just run the `fleetrun` command directly."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 2.0.0b0 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
