{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 多GPU计算-单机多卡启动并行训练\n",
    "\n",
    "本节中我们将展示如何使用多块GPU计算，例如，使用多块GPU训练同一个模型。正如所期望的那样，运行本节中的程序需要至少2块GPU(大部分例子可以在单GPU测试通过)。事实上，一台机器上安装多块GPU很常见，这是因为主板上通常会有多个PCIe插槽。如果正确安装了NVIDIA驱动，我们可以通过`nvidia-smi`命令来查看当前计算机上的全部GPU。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wed Nov 18 09:09:58 2020       \r\n",
      "+-----------------------------------------------------------------------------+\r\n",
      "| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |\r\n",
      "|-------------------------------+----------------------+----------------------+\r\n",
      "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\r\n",
      "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\r\n",
      "|===============================+======================+======================|\r\n",
      "|   0  Tesla V100-SXM2...  On   | 00000000:06:00.0 Off |                    0 |\r\n",
      "| N/A   36C    P0    39W / 300W |      0MiB / 32480MiB |      0%      Default |\r\n",
      "+-------------------------------+----------------------+----------------------+\r\n",
      "                                                                               \r\n",
      "+-----------------------------------------------------------------------------+\r\n",
      "| Processes:                                                       GPU Memory |\r\n",
      "|  GPU       PID   Type   Process name                             Usage      |\r\n",
      "|=============================================================================|\r\n",
      "|  No running processes found                                                 |\r\n",
      "+-----------------------------------------------------------------------------+\r\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "飞桨2.0增加paddle.distributed.spawn函数来启动单机多卡训练，同时原有的paddle.distributed.launch的方式依然保留。\n",
    "\n",
    "1. paddle.distributed.launch通过指定启动的程序文件，以文件为单位启动多进程来实现多卡同步训练。在aistudio脚本任务说明里，就是推荐这种方法启动多卡任务。launch这种方式对进程管理要求较高。\n",
    "1. paddle.distributed.spawn是以function函数为单位启动多进程来实现多卡同步的，可以更好地控制进程，在日志打印、训练退出时更友好。这是当前推荐的用法。\n",
    "下面分别介绍这两种方法。"
   ]
  },
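  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the per-process coordination concrete: the launch logs later in this notebook show that each trainer process receives environment variables such as PADDLE_TRAINER_ID and PADDLE_TRAINERS_NUM. A minimal pure-Python sketch of how a trainer process could use them (the round-robin sharding scheme here is an illustrative assumption, not Paddle's internal implementation):\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "def get_rank_and_world_size():\n",
    "    # launch sets these per-process variables; default to single-process values\n",
    "    rank = int(os.environ.get('PADDLE_TRAINER_ID', '0'))\n",
    "    world_size = int(os.environ.get('PADDLE_TRAINERS_NUM', '1'))\n",
    "    return rank, world_size\n",
    "\n",
    "def shard_indices(num_samples, rank, world_size):\n",
    "    # round-robin split of sample indices across trainer processes\n",
    "    return list(range(rank, num_samples, world_size))\n",
    "\n",
    "rank, world_size = get_rank_and_world_size()\n",
    "print(shard_indices(8, rank, world_size))\n",
    "```\n",
    "\n",
    "Run outside of launch, this prints all 8 indices; under launch with 2 trainers, each process would print its own half."
   ]
  },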
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 飞桨单机多卡启动方式1、launch启动\n",
    "### 使用高层API的场景\n",
    "飞桨框架2.0全新推出高层API，是对飞桨API的进一步封装与升级，提供了更加简洁易用的API，进一步提升了飞桨的易学易用性，并增强飞桨的功能。\n",
    "* 当调用paddle.Model高层API来实现训练时，想要启动单机多卡训练非常简单，代码不需要做任何修改，只需要在启动时增加一下参数-m paddle.distributed.launch。\n",
    "\n",
    "        #单机单卡启动，默认使用第0号卡\n",
    "        $ python train.py\n",
    "        \n",
    "        #单机多卡启动，默认使用当前可见的所有卡\n",
    "        $ python -m paddle.distributed.launch train.py\n",
    "\n",
    "        #单机多卡启动，设置当前使用的第0号和第1号卡\n",
    "        $ python -m paddle.distributed.launch --selected_gpus='0,1' train.py\n",
    "\n",
    "        #单机多卡启动，设置当前使用第0号和第1号卡\n",
    "        $ export CUDA_VISIABLE_DEVICES='0,1'\n",
    "        $ python -m paddle.distributed.launch train.py\n",
    "\n",
    "* 下面是一个高阶API的例子代码，直接执行cell代码框，就会在根目录生成hapitrain.py文件，然后就可以使用python来启动这个训练了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting hapitrain.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile hapitrain.py \n",
    "import paddle \n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "lenet = paddle.vision.models.LeNet()\n",
    "\n",
    "# Mnist继承paddle.nn.Layer属于Net，model包含了训练功能\n",
    "model = paddle.Model(lenet)\n",
    "\n",
    "# 设置训练模型所需的optimizer, loss, metric\n",
    "model.prepare(\n",
    "    paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),\n",
    "    paddle.nn.CrossEntropyLoss(),\n",
    "    paddle.metric.Accuracy(topk=(1, 2))\n",
    "    )\n",
    "\n",
    "# 启动训练\n",
    "model.fit(train_dataset, epochs=1, batch_size=64, log_freq=400)\n",
    "\n",
    "# 启动评估\n",
    "model.evaluate(test_dataset, log_freq=100, batch_size=64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "........\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "..\n",
      "Download finished\n",
      "W1118 09:10:18.732419   139 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:10:18.755846   139 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Epoch 1/1\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "step 400/938 - loss: 0.2925 - acc_top1: 0.8588 - acc_top2: 0.9318 - 6ms/step\n",
      "step 800/938 - loss: 0.0961 - acc_top1: 0.9016 - acc_top2: 0.9572 - 5ms/step\n",
      "step 938/938 - loss: 0.2453 - acc_top1: 0.9098 - acc_top2: 0.9615 - 5ms/step\n",
      "Eval begin...\n",
      "step 100/157 - loss: 0.0057 - acc_top1: 0.9469 - acc_top2: 0.9841 - 3ms/step\n",
      "step 157/157 - loss: 0.0034 - acc_top1: 0.9558 - acc_top2: 0.9861 - 3ms/step\n",
      "Eval samples: 10000\n"
     ]
    }
   ],
   "source": [
    "# 单机单卡启动，默认使用第0号卡\n",
    "!python hapitrain.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: None\n",
      "started_port: None\n",
      "training_script: hapitrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:17:03,360 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:36665 rank:0\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:17:03,361 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'hapitrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:36665', 'PADDLE_TRAINERS_NUM': '1', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:36665'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:17:08.271320   453 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:17:08.275751   453 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Epoch 1/1\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "step 400/938 - loss: 0.6140 - acc_top1: 0.8581 - acc_top2: 0.9339 - 5ms/step\n",
      "step 800/938 - loss: 0.0859 - acc_top1: 0.8993 - acc_top2: 0.9583 - 5ms/step\n",
      "step 938/938 - loss: 0.1046 - acc_top1: 0.9066 - acc_top2: 0.9624 - 5ms/step\n",
      "Eval begin...\n",
      "step 100/157 - loss: 0.0626 - acc_top1: 0.9517 - acc_top2: 0.9844 - 3ms/step\n",
      "step 157/157 - loss: 0.0261 - acc_top1: 0.9594 - acc_top2: 0.9873 - 3ms/step\n",
      "Eval samples: 10000\n",
      "INFO 2020-11-18 09:17:18,384 launch.py:235] Local procs complete, POD info:rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:['0'] endpoint:127.0.0.1:36665 rank:0\"]\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，默认使用当前可见的所有卡\n",
    "!python -m paddle.distributed.launch hapitrain.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: 0,1\n",
      "started_port: None\n",
      "training_script: hapitrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:17:19,852 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:46376 rank:0\", \"gpu:[\\'1\\'] endpoint:127.0.0.1:37226 rank:1\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:17:19,852 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'hapitrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:46376', 'PADDLE_TRAINERS_NUM': '2', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:46376,127.0.0.1:37226'}\n",
      "INFO 2020-11-18 09:17:19,861 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'hapitrain.py'] env:{'FLAGS_selected_gpus': '1', 'PADDLE_TRAINER_ID': '1', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:37226', 'PADDLE_TRAINERS_NUM': '2', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:46376,127.0.0.1:37226'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:17:24.750614   531 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:17:24.755164   531 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Traceback (most recent call last):\n",
      "  File \"hapitrain.py\", line 5, in <module>\n",
      "    lenet = paddle.vision.models.LeNet()\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/vision/models/lenet.py\", line 42, in __init__\n",
      "    1, 6, 3, stride=1, padding=1),\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/conv.py\", line 576, in __init__\n",
      "    data_format=data_format)\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/conv.py\", line 111, in __init__\n",
      "    shape=filter_shape, attr=self._param_attr)\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py\", line 389, in create_parameter\n",
      "    default_initializer)\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layer_helper_base.py\", line 372, in create_parameter\n",
      "    **attr._to_kwargs(with_initializer=True))\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py\", line 2877, in create_parameter\n",
      "    initializer(param, self)\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/initializer.py\", line 568, in __call__\n",
      "    stop_gradient=True)\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py\", line 2977, in _prepend_op\n",
      "    kwargs.get(\"stop_gradient\", False))\n",
      "  File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/tracer.py\", line 45, in trace_op\n",
      "    not stop_gradient)\n",
      "paddle.fluid.core_avx.EnforceNotMet: \n",
      "----------------------\n",
      "Error Message Summary:\n",
      "----------------------\n",
      "UnimplementedError: Place CUDAPlace(0) is not supported. Please check that your paddle compiles with WITH_GPU or WITH_XPU option or check that your train process hold the correct gpu_id if you use Executor. (at /paddle/paddle/fluid/platform/device_context.cc:69)\n",
      "  [operator < uniform_random > error]\n",
      "\n",
      "----------------------\n",
      "Error Message Summary:\n",
      "----------------------\n",
      "FatalError: A serious error (Termination signal) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303)\n",
      "  [TimeInfo: *** Aborted at 1605662245 (unix time) try \"date -d @1605662245\" if you are using GNU date ***]\n",
      "  [SignalInfo: *** SIGTERM (@0x3e8000001da) received by PID 531 (TID 0x7f53b5cd1700) from PID 474 ***]\n",
      "\n",
      "INFO 2020-11-18 09:17:28,876 utils.py:275] terminate all the procs\n",
      "ERROR 2020-11-18 09:17:28,876 utils.py:445] ABORT!!! Out of all 2 trainers, the trainer process with rank=[1] was aborted. Please check its log.\n",
      "INFO 2020-11-18 09:17:31,879 utils.py:275] terminate all the procs\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，设置当前使用的第0号和第1号卡 意料之中的，在单卡机器上会报错\n",
    "!python -m paddle.distributed.launch --selected_gpus='0,1' hapitrain.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: None\n",
      "started_port: None\n",
      "training_script: hapitrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:17:33,340 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:38317 rank:0\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:17:33,341 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'hapitrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:38317', 'PADDLE_TRAINERS_NUM': '1', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:38317'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:17:38.307148   643 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:17:38.311512   643 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Epoch 1/1\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "step 400/938 - loss: 0.4510 - acc_top1: 0.8706 - acc_top2: 0.9384 - 6ms/step\n",
      "step 800/938 - loss: 0.2371 - acc_top1: 0.9078 - acc_top2: 0.9606 - 5ms/step\n",
      "step 938/938 - loss: 0.0258 - acc_top1: 0.9151 - acc_top2: 0.9644 - 5ms/step\n",
      "Eval begin...\n",
      "step 100/157 - loss: 0.0347 - acc_top1: 0.9511 - acc_top2: 0.9838 - 3ms/step\n",
      "step 157/157 - loss: 0.0023 - acc_top1: 0.9593 - acc_top2: 0.9866 - 3ms/step\n",
      "Eval samples: 10000\n",
      "INFO 2020-11-18 09:17:51,366 launch.py:235] Local procs complete, POD info:rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:['0'] endpoint:127.0.0.1:38317 rank:0\"]\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，设置当前使用第0号和第1号卡 aistudio单卡也可以运行，可以看到launch的容错率较高\n",
    "!CUDA_VISIABLE_DEVICES='0,1' && python -m paddle.distributed.launch hapitrain.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 使用基础API场景\n",
    "* 如果使用基础API的代码程序启动单机多卡训练，则需要对单机单卡的代码进行三处修改，具体看下面未改变版本和改变版本的三处修改对比：\n",
    "\n",
    "\n",
    "* #第1处改动，import库\n",
    "\n",
    "import paddle.distributed as dist \n",
    "* #第2处改动，初始化并行环境\n",
    "\n",
    "dist.init_parallel_env()\n",
    "* #第3处改动，增加paddle.DataParallel封装\n",
    "\n",
    "net = paddle.DataParallel(paddle.vision.models.LeNet()) \n",
    "\n",
    "执行下面cell将代码保存为normaltrain.py文件，然后像高层代码那样使用python -m paddle.distributed.launch启动。\n",
    "\n",
    "当然，因为改动了代码，所以这个程序是无法使用普通模式python normaltrain.py运行的。"
   ]
  },
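  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What the paddle.DataParallel wrapper adds, conceptually, is an all-reduce of the gradients after loss.backward(): each card computes gradients on its own mini-batch shard, and the synchronized gradient is the element-wise mean across cards. A toy pure-Python sketch of that averaging step (illustrative only, not Paddle's implementation):\n",
    "\n",
    "```python\n",
    "def allreduce_mean(grads_per_card):\n",
    "    # grads_per_card: one flat gradient list per card\n",
    "    n_cards = len(grads_per_card)\n",
    "    n_params = len(grads_per_card[0])\n",
    "    # element-wise mean of the per-card gradients\n",
    "    return [sum(g[i] for g in grads_per_card) / n_cards for i in range(n_params)]\n",
    "\n",
    "# two cards, two parameters each\n",
    "print(allreduce_mean([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]\n",
    "```\n",
    "\n",
    "After this step every card holds the same averaged gradient, so the optimizer update stays identical across processes."
   ]
  },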
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting normaltrain.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile normaltrain.py \n",
    "import paddle #这是有3处改动的版本\n",
    "import paddle.distributed as dist #第1处改动，导入库\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "\n",
    "# 加载训练集 batch_size 设为 64\n",
    "train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)\n",
    "\n",
    "def train():\n",
    "    # 第2处改动，初始化并行环境\n",
    "    dist.init_parallel_env()\n",
    "\n",
    "    # 第3处改动，增加paddle.DataParallel封装\n",
    "    net = paddle.DataParallel(paddle.vision.models.LeNet()) \n",
    "    epochs = 1\n",
    "    adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=net.parameters())\n",
    "    # 用Adam作为优化函数\n",
    "    for epoch in range(epochs):\n",
    "        for batch_id, data in enumerate(train_loader()):\n",
    "            x_data = data[0]\n",
    "            y_data = data[1]\n",
    "            predicts = net(x_data)  \n",
    "            acc = paddle.metric.accuracy(predicts, y_data, k=2)\n",
    "            avg_acc = paddle.mean(acc)\n",
    "            loss = paddle.nn.functional.cross_entropy(predicts, y_data, reduction='mean') \n",
    "            loss.backward() \n",
    "            if batch_id % 400 == 0:\n",
    "                print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, loss.numpy(), avg_acc.numpy())) #这里手册误写成了avg_loss\n",
    "            adam.step()\n",
    "            adam.clear_grad()\n",
    "# 启动训练\n",
    "train()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: None\n",
      "started_port: None\n",
      "training_script: normaltrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:23:53,088 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:44792 rank:0\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:23:53,088 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'normaltrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:44792', 'PADDLE_TRAINERS_NUM': '1', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:44792'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:23:58.055003   915 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:23:58.059552   915 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [51.764183], acc is: [0.15625]\n",
      "epoch: 0, batch_id: 400, loss is: [0.29813138], acc is: [0.984375]\n",
      "epoch: 0, batch_id: 800, loss is: [0.09248035], acc is: [0.984375]\n",
      "INFO 2020-11-18 09:24:08,113 launch.py:235] Local procs complete, POD info:rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:['0'] endpoint:127.0.0.1:44792 rank:0\"]\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，默认使用当前可见的所有卡\n",
    "!python -m paddle.distributed.launch normaltrain.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: 0,1\n",
      "started_port: None\n",
      "training_script: normaltrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:29:08,443 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:60920 rank:0\", \"gpu:[\\'1\\'] endpoint:127.0.0.1:47294 rank:1\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:29:08,444 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'normaltrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:60920', 'PADDLE_TRAINERS_NUM': '2', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:60920,127.0.0.1:47294'}\n",
      "INFO 2020-11-18 09:29:08,453 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'normaltrain.py'] env:{'FLAGS_selected_gpus': '1', 'PADDLE_TRAINER_ID': '1', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:47294', 'PADDLE_TRAINERS_NUM': '2', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:60920,127.0.0.1:47294'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.server not ready, wait 3 sec to retry...\n",
      "not ready endpoints:['127.0.0.1:60920']\n",
      "server not ready, wait 3 sec to retry...\n",
      "not ready endpoints:['127.0.0.1:60920']\n",
      "E1118 09:29:16.585847  1337 pybind.cc:1389] Invalid CUDAPlace(1), must inside [0, 1), because GPU number on your machine is 1\n",
      "\n",
      "----------------------\n",
      "Error Message Summary:\n",
      "----------------------\n",
      "FatalError: A serious error (Termination signal) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303)\n",
      "  [TimeInfo: *** Aborted at 1605662957 (unix time) try \"date -d @1605662957\" if you are using GNU date ***]\n",
      "  [SignalInfo: *** SIGTERM (@0x3e8000004ff) received by PID 1336 (TID 0x7fdc4830b700) from PID 1279 ***]\n",
      "\n",
      "INFO 2020-11-18 09:29:20,473 utils.py:275] terminate all the procs\n",
      "ERROR 2020-11-18 09:29:20,473 utils.py:445] ABORT!!! Out of all 2 trainers, the trainer process with rank=[1] was aborted. Please check its log.\n",
      "INFO 2020-11-18 09:29:23,477 utils.py:275] terminate all the procs\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，设置当前使用的第0号和第1号卡 意料之中在单卡机器上会报错\n",
    "!python -m paddle.distributed.launch --selected_gpus='0,1' normaltrain.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed./opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py:125: RuntimeWarning: 'paddle.distributed.launch' found in sys.modules after import of package 'paddle.distributed', but prior to execution of 'paddle.distributed.launch'; this may result in unpredictable behaviour\n",
      "  warn(RuntimeWarning(msg))\n",
      "-----------  Configuration Arguments -----------\n",
      "cluster_node_ips: 127.0.0.1\n",
      "log_dir: None\n",
      "log_level: 20\n",
      "node_ip: 127.0.0.1\n",
      "print_config: True\n",
      "selected_gpus: None\n",
      "started_port: None\n",
      "training_script: normaltrain.py\n",
      "training_script_args: []\n",
      "use_paddlecloud: False\n",
      "------------------------------------------------\n",
      "INFO 2020-11-18 09:29:24,901 launch.py:216] get cluster from args:job_server:None pods:['rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:[\\'0\\'] endpoint:127.0.0.1:53202 rank:0\"]'] job_stage_flag:None hdfs:None\n",
      "INFO 2020-11-18 09:29:24,902 utils.py:379] start trainer proc:['/opt/conda/envs/python35-paddle120-env/bin/python', '-u', 'normaltrain.py'] env:{'FLAGS_selected_gpus': '0', 'PADDLE_TRAINER_ID': '0', 'PADDLE_CURRENT_ENDPOINT': '127.0.0.1:53202', 'PADDLE_TRAINERS_NUM': '1', 'PADDLE_TRAINER_ENDPOINTS': '127.0.0.1:53202'}\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:29:29.862380  1447 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:29:29.866731  1447 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [45.305645], acc is: [0.296875]\n",
      "epoch: 0, batch_id: 400, loss is: [0.04563246], acc is: [1.]\n",
      "epoch: 0, batch_id: 800, loss is: [0.29160503], acc is: [0.953125]\n",
      "INFO 2020-11-18 09:29:39,926 launch.py:235] Local procs complete, POD info:rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:[\"gpu:['0'] endpoint:127.0.0.1:53202 rank:0\"]\n"
     ]
    }
   ],
   "source": [
    "# 单机多卡启动，设置当前使用第0号和第1号卡\n",
    "!CUDA_VISIABLE_DEVICES='0,1' && python -m paddle.distributed.launch normaltrain.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 飞桨单机多卡启动方式2、spawn启动\n",
    "launch方式启动训练，以文件为单位启动多进程，需要用户在启动时调用paddle.distributed.launch，对于进程的管理要求较高。飞桨框架2.0版本增加了spawn启动方式，可以更好地控制进程，在日志打印、训练退出时更友好。\n",
    "\n",
    "飞桨spawn启动命令为python train.py, 代码部分只需要在训练函数外加上一层dist.spawn()就行了：\n",
    "\n",
    "        # 启动train多进程训练，默认使用所有可见的GPU卡\n",
    "        if __name__ == '__main__':\n",
    "            dist.spawn(train)\n",
    "\n",
    "        # 启动train函数2个进程训练，默认使用当前可见的前2张卡\n",
    "        if __name__ == '__main__':\n",
    "            dist.spawn(train, nprocs=2)\n",
    "\n",
    "        # 启动train函数2个进程训练，默认使用第4号和第5号卡\n",
    "        if __name__ == '__main__':\n",
    "            dist.spawn(train, nprocs=2, selelcted_gpus='4,5')\n",
    "            \n"
   ]
  },
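  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The spawn pattern above (start N copies of a function, one per card, then join them on exit) has a direct standard-library analogue. A minimal sketch with multiprocessing, where the spawn and train names are illustrative rather than Paddle's internals, and the 'fork' start method is used for simplicity:\n",
    "\n",
    "```python\n",
    "import multiprocessing as mp\n",
    "\n",
    "def train(rank, queue):\n",
    "    # a real trainer would pick its device from rank and run the training loop\n",
    "    queue.put(rank)\n",
    "\n",
    "def spawn(func, nprocs):\n",
    "    ctx = mp.get_context('fork')\n",
    "    queue = ctx.Queue()\n",
    "    procs = [ctx.Process(target=func, args=(r, queue)) for r in range(nprocs)]\n",
    "    for p in procs:\n",
    "        p.start()\n",
    "    for p in procs:\n",
    "        p.join()\n",
    "    # collect one result per process, ordered by rank\n",
    "    return sorted(queue.get() for _ in range(nprocs))\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    print(spawn(train, nprocs=2))  # [0, 1]\n",
    "```\n",
    "\n",
    "This also shows why the if __name__ == '__main__' guard appears in every dist.spawn example: child processes must not re-execute the launching code."
   ]
  },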
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 飞桨spawn方式启动GPU多卡高层API程序"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting hapispawn.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile hapispawn.py\n",
    "import paddle\n",
    "import paddle.distributed as dist\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "lenet = paddle.vision.models.LeNet()\n",
    "\n",
    "# Mnist继承paddle.nn.Layer属于Net，model包含了训练功能\n",
    "model = paddle.Model(lenet)\n",
    "\n",
    "# Configure the optimizer, loss and metric needed to train the model\n",
    "model.prepare(\n",
    "    paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),\n",
    "    paddle.nn.CrossEntropyLoss(),\n",
    "    paddle.metric.Accuracy(topk=(1, 2))\n",
    "    )\n",
    "def train():\n",
    "    # Start training\n",
    "    model.fit(train_dataset, epochs=1, batch_size=64, log_freq=400)\n",
    "\n",
    "    # Start evaluation\n",
    "    model.evaluate(test_dataset, log_freq=100, batch_size=64)\n",
    "    \n",
    "if __name__ == '__main__':\n",
    "    dist.spawn(train)"
   ]
  },
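  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When model.fit runs inside the spawned workers, each process trains on its own shard of the data (Paddle arranges this with a distributed sampler). A minimal sketch of the round-robin sharding idea, with shard_indices as our illustrative helper rather than a Paddle API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def shard_indices(n_samples, rank, nprocs):\n",
    "    # Rank r takes samples r, r+nprocs, r+2*nprocs, ... so shards never overlap.\n",
    "    return list(range(rank, n_samples, nprocs))\n",
    "\n",
    "# With 2 workers, each sees half of a 10-sample dataset.\n",
    "print(shard_indices(10, 0, 2))  # [0, 2, 4, 6, 8]\n",
    "print(shard_indices(10, 1, 2))  # [1, 3, 5, 7, 9]"
   ]
  },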
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:41:40.214654  1892 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:41:40.219010  1892 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 09:41:49.485471  1983 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 09:41:49.489800  1983 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "Epoch 1/1\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "step 400/938 - loss: 0.1314 - acc_top1: 0.8718 - acc_top2: 0.9395 - 5ms/step\n",
      "step 800/938 - loss: 0.0904 - acc_top1: 0.9091 - acc_top2: 0.9612 - 5ms/step\n",
      "step 938/938 - loss: 0.0111 - acc_top1: 0.9161 - acc_top2: 0.9650 - 5ms/step\n",
      "Eval begin...\n",
      "step 100/157 - loss: 0.0807 - acc_top1: 0.9513 - acc_top2: 0.9864 - 3ms/step\n",
      "step 157/157 - loss: 0.0013 - acc_top1: 0.9568 - acc_top2: 0.9882 - 3ms/step\n",
      "Eval samples: 10000\n"
     ]
    }
   ],
   "source": [
    "!python hapispawn.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Launching a multi-GPU program with spawn: basic API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting normalspawn.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile normalspawn.py \n",
    "import paddle  # basic-API version, unmodified\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "lenet = paddle.vision.models.LeNet()\n",
    "\n",
    "# Load the training set with batch_size set to 64\n",
    "train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)\n",
    "\n",
    "def train():\n",
    "    epochs = 1\n",
    "    # Use Adam as the optimizer\n",
    "    adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=lenet.parameters())\n",
    "    for epoch in range(epochs):\n",
    "        for batch_id, data in enumerate(train_loader()):\n",
    "            x_data, y_data = data\n",
    "            predicts = lenet(x_data)\n",
    "            loss = paddle.nn.functional.cross_entropy(predicts, y_data, reduction='mean')\n",
    "            acc = paddle.metric.accuracy(predicts, y_data, k=1)\n",
    "            avg_acc = paddle.mean(acc)\n",
    "            loss.backward()\n",
    "            if batch_id % 400 == 0:\n",
    "                print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, loss.numpy(), avg_acc.numpy()))\n",
    "            adam.step()\n",
    "            adam.clear_grad()\n",
    "\n",
    "# Launch multi-process training of train; by default all visible GPUs are used\n",
    "import paddle.distributed as dist\n",
    "if __name__ == '__main__':\n",
    "    dist.spawn(train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 10:01:54.840170  2609 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 10:01:54.844610  2609 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 10:02:04.151060  2669 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 10:02:04.155709  2669 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [51.331646], acc is: [0.046875]\n",
      "epoch: 0, batch_id: 400, loss is: [0.07309438], acc is: [0.953125]\n",
      "epoch: 0, batch_id: 800, loss is: [0.15749319], acc is: [0.9375]\n"
     ]
    }
   ],
   "source": [
    "!python normalspawn.py "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Likewise, the basic-API program that was modified for the launch method also runs fine under spawn once the dist.spawn code is added. Of course, there is no need to gild the lily like that."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting normal3spawn.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile normal3spawn.py \n",
    "import paddle  # this is the version with 3 changes\n",
    "import paddle.distributed as dist  # change 1: import the distributed package\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "\n",
    "# Load the training set with batch_size set to 64\n",
    "train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)\n",
    "\n",
    "def train():\n",
    "    # Change 2: initialize the parallel environment\n",
    "    dist.init_parallel_env()\n",
    "\n",
    "    # Change 3: wrap the model in paddle.DataParallel\n",
    "    net = paddle.DataParallel(paddle.vision.models.LeNet())  # the manual omits LeNet's full module path here\n",
    "    epochs = 1\n",
    "    # Use Adam as the optimizer\n",
    "    adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=net.parameters())\n",
    "    for epoch in range(epochs):\n",
    "        for batch_id, data in enumerate(train_loader()):\n",
    "            x_data = data[0]\n",
    "            y_data = data[1]\n",
    "            predicts = net(x_data)  \n",
    "            acc = paddle.metric.accuracy(predicts, y_data, k=2)\n",
    "            avg_acc = paddle.mean(acc)\n",
    "            loss = paddle.nn.functional.cross_entropy(predicts, y_data, reduction='mean') \n",
    "            loss.backward()  # the manual mistakenly wrote avg_loss here\n",
    "            if batch_id % 400 == 0:\n",
    "                print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, loss.numpy(), avg_acc.numpy()))  # the manual mistakenly wrote avg_loss here\n",
    "            adam.step()\n",
    "            adam.clear_grad()\n",
    "\n",
    "\n",
    "# Launch multi-process training of train; by default all visible GPUs are used\n",
    "if __name__ == '__main__':\n",
    "    dist.spawn(train)\n"
   ]
  },
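  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the DataParallel version above, loss.backward() also synchronizes gradients across the workers so that every process applies the same update. The core operation is an element-wise mean across workers; a toy sketch with plain lists (allreduce_mean is our illustrative name; the real DataParallel does this with an all-reduce on GPU tensors):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def allreduce_mean(grads_per_worker):\n",
    "    # Average each parameter's gradient across all workers, element-wise.\n",
    "    nprocs = len(grads_per_worker)\n",
    "    return [sum(g) / nprocs for g in zip(*grads_per_worker)]\n",
    "\n",
    "# Two workers, two parameters: both end up applying the averaged gradients.\n",
    "print(allreduce_mean([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]"
   ]
  },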
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1118 10:19:23.493568  3337 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1118 10:19:23.498428  3337 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n"
     ]
    }
   ],
   "source": [
    "!python normal3spawn.py "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Summary: single-machine multi-GPU parallel computing in PaddlePaddle 2.0\n",
    "\n",
    "PaddlePaddle 2.0 ships a complete solution for parallel computing, in a training framework proven on ultra-large-scale production workloads. Parallel computing really is this simple!\n",
    "\n",
    "## For single-machine multi-GPU training, spawn is the recommended launch method\n",
    "\n",
    "spawn's advantage is that almost no code changes are needed: import paddle.distributed as dist, then call the training function through dist.spawn at the end. spawn also controls the worker processes better and is friendlier for log printing and for shutting training down.\n",
    "\n",
    "The original program stays almost unchanged. Assuming the training function is train, only the following three lines are added:\n",
    "\n",
    "        import paddle.distributed as dist\n",
    "        if __name__ == '__main__':\n",
    "            dist.spawn(train)\n",
    "            \n",
    "Then run python train.py to start training.\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
