{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Asynchronous Computation (PaddlePaddle 2.0)\n",
     "*Dive into Deep Learning* is an excellent introductory textbook; the original code is based on MXNet, and this notebook provides a PaddlePaddle 2.0 version. My understanding of asynchronous computation may be imperfect, so please point out any problems in the comments so we can learn and improve together. Anyone interested in contributing is also welcome to contact Teacher Ruirui.\n",
     "\n",
     "Link to the original Section 8.2:\n",
     "http://zh.gluon.ai/chapter_computational-performance/async-computation.html\n",
     "\n",
     "PaddlePaddle uses asynchronous computation to improve performance. Understanding how it works both helps us write more efficient programs and lets us deliberately trade computing performance for a smaller memory footprint when memory is limited. We discuss asynchronous computation from two angles:\n",
     "1. asynchronous computation between the CPU and the GPU in PaddlePaddle\n",
     "2. asynchronous data loading"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Asynchronous Computation Between the CPU and the GPU in PaddlePaddle"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "On a GPU, a PaddlePaddle computation is only guaranteed to have finished when you fetch the value, e.g. via print or .numpy(); when the Python statement itself returns, the result may not have been computed yet. This is not a parallelism feature implemented by PaddlePaddle itself but the asynchronous execution model of the CUDA device: launching a kernel hands the task to the GPU, and the CPU then continues with subsequent work. Functions such as wait_to_read, waitall, asnumpy, and asscalar (in MXNet terminology) trigger a CUDA sync, waiting for the result to be ready before continuing.\n",
     "\n",
     "For the CPU, however, there is no such asynchrony: once a statement has executed, its computation is done. The following examples therefore need a GPU environment to actually exhibit asynchronous computation."
   ]
  },
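  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The front-end/back-end split described above can be sketched in pure Python with a task queue and a worker thread (a toy model for illustration only, not PaddlePaddle's actual scheduler): enqueueing a task returns immediately, and only an explicit sync call waits for the result.\n",
    "```python\n",
    "import queue\n",
    "import threading\n",
    "\n",
    "tasks = queue.Queue()\n",
    "results = {}\n",
    "\n",
    "def backend():  # back-end worker: executes queued tasks\n",
    "    while True:\n",
    "        name, fn = tasks.get()\n",
    "        results[name] = fn()\n",
    "        tasks.task_done()\n",
    "\n",
    "threading.Thread(target=backend, daemon=True).start()\n",
    "\n",
    "def enqueue(name, fn):  # front end: returns right away\n",
    "    tasks.put((name, fn))\n",
    "\n",
    "def sync():  # like print/Tensor.numpy(): wait for the back end\n",
    "    tasks.join()\n",
    "\n",
    "enqueue('c', lambda: 1 * 1 + 2)  # queued, not necessarily computed yet\n",
    "sync()                           # block until the queue is drained\n",
    "print(results['c'])  # 3\n",
    "```"
   ]
  },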
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import subprocess\n",
    "import time\n",
    "import paddle\n",
    "import paddle.nn as nn"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,\n",
       "       [[3., 3., 3.],\n",
       "        [3., 3., 3.]])"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a =  paddle.ones([2, 3], 'float32')\n",
    "b =  paddle.ones([2, 3], 'float32')\n",
    "c = a * b + 2\n",
    "c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "With asynchronous computation, when the Python front-end thread executes the first three statements, it merely puts the tasks into the back end's queue and returns. When the last statement needs to print the result, the Python front-end thread waits for the C++ back-end thread to finish computing the variable c. One benefit of this design is that the Python front-end thread performs no actual computation, so whatever Python's performance, its impact on the overall program is small. As long as the C++ back end is efficient enough, PaddlePaddle can deliver consistently high performance regardless of the front-end language.\n",
     "\n",
     "To demonstrate the performance of asynchronous computation, we first implement a simple timing class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
     "class Benchmark():  # this class can be saved in the d2lzh package for later use\n",
    "    def __init__(self, prefix=None):\n",
    "        self.prefix = prefix + ' ' if prefix else ''\n",
    "\n",
    "    def __enter__(self):\n",
    "        self.start = time.time()\n",
    "\n",
    "    def __exit__(self, *args):\n",
    "        print('%stime: %.4f sec' % (self.prefix, time.time() - self.start))"
   ]
  },
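  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "As a quick sanity check, here is the same timing pattern exercised with time.sleep standing in for a workload, in a lightly extended variant that also stores the measurement in self.elapsed and returns itself from __enter__ (these two additions are ours, not part of the class above):\n",
    "```python\n",
    "import time\n",
    "\n",
    "class Benchmark():\n",
    "    def __init__(self, prefix=None):\n",
    "        self.prefix = prefix + ' ' if prefix else ''\n",
    "\n",
    "    def __enter__(self):\n",
    "        self.start = time.time()\n",
    "        return self  # addition: allows `with Benchmark() as b`\n",
    "\n",
    "    def __exit__(self, *args):\n",
    "        self.elapsed = time.time() - self.start  # addition: keep the number\n",
    "        print('%stime: %.4f sec' % (self.prefix, self.elapsed))\n",
    "\n",
    "with Benchmark('sleep') as b:\n",
    "    time.sleep(0.05)\n",
    "assert b.elapsed >= 0.05  # the context manager measured the sleep\n",
    "```"
   ]
  },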
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "The following example uses timing to show the effect of asynchronous computation. Note that y = paddle.dot(x, x).sum() returns without waiting for the variable y to actually be computed; only when the print function needs to print y must the program wait for the computation to finish."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "x = paddle.randn([20000, 20000]) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Workloads are queued. time: 0.0005 sec\n",
      "sum = Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=True,\n",
      "       [400000704.])\n",
      "Workloads are finished. time: 0.0016 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark('Workloads are queued.'):    \n",
    "    y = paddle.dot(x, x).sum()\n",
    "\n",
    "with Benchmark('Workloads are finished.'):\n",
    "    print('sum =', y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Benchmark shows that y = paddle.dot(x, x).sum() appears to run very quickly (about 0.5 ms here), while the print statement takes longer (about 1.6 ms). Let us split the code into two cells and run it again to verify this interpretation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Workloads are queued. time: 0.0007 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark('Workloads are queued.'):    \n",
    "    y = paddle.dot(x, x).sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "sum = Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=True,\n",
      "       [400000704.])\n",
      "Workloads are finished. time: 0.0009 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark('Workloads are finished.'):\n",
    "    print('sum =', y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "After splitting into two cells, both run very quickly (about 0.7 ms and 0.9 ms here). Why are the Benchmark time of paddle.dot(x, x).sum() and that of the print statement both so short? Because the computation is asynchronous: the GPU work does not fall inside either cell's Benchmark window, so neither measurement includes it.\n",
     "\n",
     "Indeed, unless we need to print or save a result, we generally need not care whether it has actually been computed yet. As long as we use operators provided by PaddlePaddle, it will use asynchronous computation by default to achieve high computing performance.\n",
     "\n",
     "\n",
     "## Using synchronization functions to make the front end wait for results\n",
     "\n",
     "PaddlePaddle's asynchrony relies on the CUDA device's own asynchronous execution: launching a kernel hands the task to the GPU while the CPU continues with subsequent work, and a CUDA sync operation makes the CPU wait for the result before continuing (again, there is no such asynchrony on the CPU). PaddlePaddle does not expose MXNet-style functions such as wait_to_read, waitall, or asscalar. Besides the `print` function introduced above, Tensor.numpy() in PaddlePaddle's dynamic graph mode plays the role of asnumpy: it makes the front-end thread wait until the back end has finished computing the result.\n",
     "\n",
     "Below is an example using Tensor.numpy(). Its reported time includes the time to compute the variable `y`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Workloads are finished. time: 0.0024 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark('Workloads are finished.'):\n",
    "    y = paddle.dot(x, x)\n",
    "    y.numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Splitting into two cells again, each cell's Benchmark time is very short:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time: 0.0006 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark():\n",
    "    y = paddle.dot(x, x)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time: 0.0002 sec\n"
     ]
    }
   ],
   "source": [
    "with Benchmark():\n",
    "    y.numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "The numpy function and the print function force the front end to wait for the back end's computation results. Such functions are usually called synchronization functions.\n",
     "## Using asynchronous computation to improve performance\n",
     "\n",
     "In the example below, we repeatedly assign to the variable `y` in a `for` loop. When the synchronization function `Tensor.numpy()` is called inside the loop, every assignment is synchronized and asynchrony is lost; when it is called only after the loop, the computation runs asynchronously. (The two blocks do not perform exactly the same amount of work, so this is not a precise comparison.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "synchronous. time: 0.4832 sec\n",
      "asynchronous. time: 0.4703 sec\n"
     ]
    }
   ],
   "source": [
     "y1 = paddle.dot(x, x)  # warm-up, so one-time CUDA initialization is not timed\n",
     "with Benchmark('synchronous.'):\n",
     "    for _ in range(200):\n",
     "        y = paddle.dot(x, x)\n",
     "        y.numpy()  # synchronize every iteration\n",
     "\n",
     "with Benchmark('asynchronous.'):\n",
     "    for _ in range(200):\n",
     "        y = paddle.dot(x, x)\n",
     "    y.numpy()  # synchronize only once, after the loop"
   ]
  },
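  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Before unpacking why, here is the cost model in miniature: suppose each iteration's enqueue, compute, and return stages cost $t_1, t_2, t_3$ seconds. The numbers below are hypothetical, chosen only to make the arithmetic concrete:\n",
    "```python\n",
    "t1, t2, t3 = 1e-5, 2e-3, 1e-4  # hypothetical: enqueue, compute, return (sec)\n",
    "n = 200                        # iterations, as in the loop above\n",
    "\n",
    "sync_total = n * (t1 + t2 + t3)  # front end waits every iteration\n",
    "async_total = t1 + n * t2 + t3   # enqueueing overlaps with computation\n",
    "\n",
    "assert n * t2 > (n - 1) * t1     # the assumption behind the async bound\n",
    "print('sync  %.4f sec' % sync_total)   # 0.4220\n",
    "print('async %.4f sec' % async_total)  # 0.4001\n",
    "```"
   ]
  },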
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "We observe that asynchronous computation yields some speedup. To explain this, let us slightly simplify the interaction between the Python front-end thread and the C++ back-end thread. In each loop iteration, the interaction has roughly three stages:\n",
     "\n",
     "1. the front end asks the back end to put the computation task `paddle.dot(x, x)` into the queue;\n",
     "1. the back end fetches the task from the queue and performs the actual computation;\n",
     "1. the back end returns the result to the front end.\n",
     "\n",
     "Denote the durations of these three stages by $t_1, t_2, t_3$. Without asynchronous computation, the total time for 200 computations is roughly $200 (t_1+ t_2 + t_3)$; with asynchronous computation, since the front end never waits for the back end inside the loop, the total drops to about $t_1 + 200 t_2 + t_3$ (assuming $200 t_2 > 199 t_1$).\n",
     "\n",
     "## The effect of asynchronous computation on memory\n",
     "\n",
     "To explain the memory impact, recall the model training loops from earlier chapters, where we typically evaluate the model on each mini-batch, e.g. its loss or accuracy. Attentive readers may have noticed that such evaluations use synchronization functions, such as `Tensor.numpy()`. If these synchronization calls are removed, the front end dumps a large number of mini-batch computation tasks onto the back end in a very short time, which can lead to higher memory usage. With a synchronization call on each mini-batch, the front end hands over only one mini-batch of work per iteration, which usually reduces memory usage.\n",
     "\n",
     "Since deep learning models are typically large, if memory is limited we recommend using a synchronization function on every mini-batch during training, e.g. evaluating the model with `Tensor.numpy()`. Similarly, during prediction, synchronizing on every mini-batch (e.g. by printing its predictions directly) reduces memory usage. When speed is the priority, of course, avoiding synchronization functions makes the program run faster.\n",
     "\n",
     "Below we demonstrate the memory effect of asynchronous computation. First look at the helper function get_mem(), which monitors memory usage. Note that it only works on Linux or macOS:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
     "def get_mem():\n",
     "    # parse the RSS column (in KB) of `ps u` for the current process\n",
     "    res = subprocess.check_output(['ps', 'u', '-p', str(os.getpid())])\n",
     "    return int(str(res).split()[15]) / 1e3  # MB"
   ]
  },
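  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "On Linux and macOS, the standard library's resource module offers a parsing-free way to read memory that can cross-check the ps-based helper above. Note the differences: ru_maxrss is the peak resident set size, reported in kilobytes on Linux but in bytes on macOS.\n",
    "```python\n",
    "import resource\n",
    "import sys\n",
    "\n",
    "def get_peak_mem():\n",
    "    # peak resident set size of the current process, in MB\n",
    "    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss\n",
    "    return rss / (1e3 if sys.platform.startswith('linux') else 1e6)\n",
    "\n",
    "print('%.1f MB' % get_peak_mem())\n",
    "```"
   ]
  },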
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Now let us test with a classic handwritten-digit-recognition program, comparing the memory usage and running time of frequent versus infrequent synchronization. Since PaddlePaddle only offers `Tensor.numpy()` and the print function for synchronization, we use `Tensor.numpy()`: one run synchronizes on every batch, the other only once every 10 batches (below we call the former \"synchronous\" and the latter \"asynchronous\").\n",
     "\n",
     "First run the following cell, which writes the test script paddlemem.py:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting paddlemem.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile paddlemem.py\n",
    "import paddle \n",
    "import os\n",
    "import sys\n",
    "import subprocess\n",
    "import time\n",
    "import paddle.nn as nn\n",
    "\n",
    "args = sys.argv\n",
    "\n",
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "lenet = paddle.vision.models.LeNet()\n",
    "\n",
     "# load the training set with batch_size set to 64\n",
    "train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)\n",
    "\n",
     "# monitor memory usage\n",
    "def get_mem():\n",
    "    res = subprocess.check_output(['ps', 'u', '-p', str(os.getpid())])\n",
    "    return int(str(res).split()[15]) / 1e3\n",
    "\n",
    "def train():\n",
    "    epochs = 1\n",
     "    adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=lenet.parameters())\n",
     "    # use Adam as the optimizer\n",
    "    dctime = time.time()\n",
    "    for epoch in range(epochs):\n",
    "        for batch_id, data in enumerate(train_loader()):\n",
    "            x_data, y_data = data\n",
    "            predicts = lenet(x_data)\n",
    "            loss = paddle.nn.functional.cross_entropy(predicts, y_data, reduction='mean')\n",
    "            acc = paddle.metric.accuracy(predicts, y_data, k=1)\n",
    "            avg_acc = paddle.mean(acc)\n",
    "            loss.backward()\n",
     "            if args[1] == 'sync' or batch_id % 10 == 0:\n",
     "                tmp = loss.numpy()  # synchronization function\n",
    " \n",
    "\n",
    "            if batch_id % 400 == 0:\n",
    "                print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, loss.numpy(), avg_acc.numpy()))\n",
    "            adam.step()\n",
    "            adam.clear_grad()\n",
    "    endtime = time.time()-dctime\n",
    "    print(\"time\", endtime)\n",
    "if __name__ == '__main__':\n",
    "    mem =  get_mem()\n",
    "    \n",
    "    train()\n",
    "    print('increased memory: %f MB' % (get_mem() - mem))\n",
    "    print(get_mem(), mem )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1130 08:53:41.236024  1396 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1130 08:53:41.240175  1396 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [56.642303], acc is: [0.0625]\n",
      "epoch: 0, batch_id: 400, loss is: [0.26703086], acc is: [0.90625]\n",
      "epoch: 0, batch_id: 800, loss is: [0.05035423], acc is: [0.96875]\n",
      "time 3.7444908618927\n",
      "increased memory: 233.288000 MB\n",
      "3364.436 3131.144\n"
     ]
    }
   ],
   "source": [
    "!python paddlemem.py sync\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.W1130 08:53:55.386983  1462 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W1130 08:53:55.391258  1462 device_context.cc:346] device: 0, cuDNN Version: 7.6.\n",
      "epoch: 0, batch_id: 0, loss is: [53.559128], acc is: [0.09375]\n",
      "epoch: 0, batch_id: 400, loss is: [0.87221307], acc is: [0.890625]\n",
      "epoch: 0, batch_id: 800, loss is: [0.16656862], acc is: [0.953125]\n",
      "time 3.700477123260498\n",
      "increased memory: 233.380000 MB\n",
      "3369.284 3135.9\n"
     ]
    }
   ],
   "source": [
    "!python paddlemem.py async"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
       "Synchronous run: memory increase 233.288 MB, training time 3.74 s\n",
       "\n",
       "Asynchronous run: memory increase 233.38 MB, training time 3.7 s\n",
       "\n",
       "The differences are tiny: the synchronous run uses 0.092 MB less memory and takes 0.044 s longer\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
     "asyncmem, asynctime = 233.380000, 3.700477123260498\n",
     "syncmem, synctime = 233.288000, 3.7444908618927\n",
     "print(f\"\"\"\n",
     "Synchronous run: memory increase {syncmem} MB, training time {synctime:0.3} s\n",
     "\n",
     "Asynchronous run: memory increase {asyncmem} MB, training time {asynctime:0.3} s\n",
     "\n",
     "The differences are tiny: the synchronous run uses {asyncmem - syncmem:0.3} MB less memory and takes {synctime - asynctime:0.3} s longer\n",
     "\n",
     "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Repeating the experiment a few times, the differences remain small but consistent: the synchronous run is slower and uses less memory than the asynchronous one. (Of course, over every 10 batches the synchronous run performs 9 extra `tmp = loss.numpy()` calls, which costs a little extra time, so this is also not a precise comparison.)\n",
     "\n",
     "The difference is indeed very small. Partly this may be because the experiment is crude; partly it shows that PaddlePaddle's implementation is well optimized, handling both synchronous and asynchronous queues very efficiently!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Asynchronous Data Loading in PaddlePaddle"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "When the training dataset is large or the preprocessing logic is complex, serial data reading often becomes the bottleneck of training. In such cases we usually load data asynchronously with multiple threads or processes, improving both reading and overall training efficiency.\n",
     "\n",
     "PaddlePaddle recommends\n",
     "\n",
     "* DataLoader, flexible asynchronous loading\n",
     "\n",
     "This API supports multi-process asynchronous loading and is the data-reading approach PaddlePaddle promotes going forward. Users can set num_workers to choose the number of loader processes, accommodating datasets of different scales.\n",
     "\n",
     "For detailed usage and examples, see the API documentation: fluid.io.DataLoader"
   ]
  },
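  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The idea behind asynchronous loading can be sketched without PaddlePaddle: a producer thread prefetches batches into a bounded queue while the training loop consumes them (a toy model of what `DataLoader` does with `num_workers > 0`; the function names below are ours):\n",
    "```python\n",
    "import queue\n",
    "import threading\n",
    "import time\n",
    "\n",
    "def slow_read(i):        # stand-in for disk I/O plus preprocessing\n",
    "    time.sleep(0.01)\n",
    "    return i * i\n",
    "\n",
    "def prefetcher(n, buf):  # producer: runs ahead of the consumer\n",
    "    for i in range(n):\n",
    "        buf.put(slow_read(i))\n",
    "    buf.put(None)        # sentinel: no more batches\n",
    "\n",
    "buf = queue.Queue(maxsize=4)  # bounded buffer caps memory use\n",
    "threading.Thread(target=prefetcher, args=(8, buf), daemon=True).start()\n",
    "\n",
    "batches = []\n",
    "while (b := buf.get()) is not None:\n",
    "    batches.append(b)    # a training step would go here\n",
    "print(batches)           # [0, 1, 4, 9, 16, 25, 36, 49]\n",
    "```"
   ]
  },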
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## DataLoader\n",
     "class paddle.fluid.io.DataLoader(dataset, feed_list=None, places=None, return_list=False, batch_sampler=None, batch_size=1, shuffle=False, drop_last=False, collate_fn=None, num_workers=0, use_buffer_reader=True, use_shared_memory=False, timeout=0, worker_init_fn=None)\n",
     "\n",
     "* DataLoader returns an iterator that iterates once over the given dataset, in the order produced by batch_sampler.\n",
     "\n",
     "* DataLoader supports single-process and multi-process data loading; when num_workers is greater than 0, data is loaded asynchronously by multiple processes.\n",
     "\n",
     "* DataLoader currently only supports map-style datasets (whose samples can be indexed by subscript); for map-style datasets see paddle.io.Dataset.\n",
     "\n",
     "For batch_sampler, see fluid.io.BatchSampler.\n",
     "\n",
     "### Parameters:\n",
     "dataset (Dataset) - the dataset DataLoader loads data from; must be an instance of a subclass of paddle.io.Dataset or paddle.io.IterableDataset.\n",
     "\n",
     "feed_list (list(Tensor)|tuple(Tensor)) - list of feed variables, created by fluid.layers.data(). Required when return_list is False. Default: None.\n",
     "\n",
     "places (list(Place)|tuple(Place)) - list of Places to put the data on. Required in both static-graph and dynamic-graph mode; in dynamic-graph mode the list must have length 1. Default: None.\n",
     "\n",
     "return_list (bool) - whether the data on each device is returned as a list. If False, the data returned on each device is a str -> Tensor map whose keys are the names of the input variables; if True, it is a list(Tensor). Must be True in dynamic-graph mode. Default: False.\n",
     "\n",
     "batch_sampler (BatchSampler) - an instance of fluid.io.BatchSampler or a subclass; DataLoader uses the mini-batch index lists it produces to index samples in dataset and assemble mini-batches. Default: None.\n",
     "\n",
     "batch_size (int) - number of samples per mini-batch; an alternative to batch_sampler. If batch_sampler is not set, a fluid.io.BatchSampler is created from batch_size, shuffle and drop_last. Default: 1.\n",
     "\n",
     "shuffle (bool) - whether to shuffle the indices when generating mini-batch index lists; an alternative to batch_sampler, handled as above. Default: False.\n",
     "\n",
     "drop_last (bool) - whether to drop the final incomplete mini-batch when the number of samples is not divisible by batch_size; an alternative to batch_sampler, handled as above. Default: False.\n",
     "\n",
     "collate_fn (callable) - specifies how a list of samples is combined into mini-batch data. If None, each field of the samples is stacked along dimension 0 (as in np.stack(..., axis=0)). Default: None.\n",
     "\n",
     "num_workers (int) - number of subprocesses used for loading data; 0 means data is loaded in the main process with no subprocesses. Default: 0.\n",
     "\n",
     "use_buffer_reader (bool) - whether to use a buffered reader. If True, DataLoader asynchronously prefetches the next mini-batch, which speeds up reading but occupies a small amount of CPU/GPU memory, i.e. the space of one batch of input data. Default: True.\n",
     "\n",
     "use_shared_memory (bool) - whether to use shared memory to speed up putting data into the inter-process queue; effective only in multi-process mode (num_workers > 0). Make sure the machine has enough shared memory (e.g. space under /dev/shm/ on Linux) before enabling it. Default: False.\n",
     "\n",
     "timeout (int) - timeout for fetching mini-batch data from the subprocess output queue. Default: 0.\n",
     "\n",
     "worker_init_fn (callable) - subprocess initialization function, called when each subprocess is initialized, receiving the worker id as argument. Default: None.\n",
     "\n",
     "\n",
     "Returns: an iterator over the dataset; every element of the returned data is a Tensor.\n",
     "\n",
     "Return type: DataLoader\n",
     "\n",
     "### Code example\n",
     "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0 batch 0: loss = 3.022285223007202\n",
      "Epoch 0 batch 1: loss = 2.986398935317993\n",
      "Epoch 0 batch 2: loss = 2.4183056354522705\n",
      "Epoch 0 batch 3: loss = 2.4088308811187744\n",
      "Epoch 0 batch 4: loss = 2.411302089691162\n",
      "Epoch 0 batch 5: loss = 2.3987526893615723\n",
      "Epoch 0 batch 6: loss = 2.6042914390563965\n",
      "Epoch 0 batch 7: loss = 2.5803699493408203\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "import paddle\n",
    "import paddle.nn as nn\n",
    "import paddle.nn.functional as F\n",
    "from paddle.io import Dataset, BatchSampler, DataLoader\n",
    "\n",
    "BATCH_NUM = 8\n",
    "BATCH_SIZE = 16\n",
    "EPOCH_NUM = 1\n",
    "\n",
    "IMAGE_SIZE = 784\n",
    "CLASS_NUM = 10\n",
    "\n",
     "USE_GPU = False  # whether to use the GPU to run the model\n",
    "\n",
    "# define a random dataset\n",
    "class RandomDataset(Dataset):\n",
    "    def __init__(self, num_samples):\n",
    "        self.num_samples = num_samples\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        image = np.random.random([IMAGE_SIZE]).astype('float32')\n",
    "        label = np.random.randint(0, CLASS_NUM - 1, (1, )).astype('int64')\n",
    "        return image, label\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.num_samples\n",
    "\n",
    "dataset = RandomDataset(BATCH_NUM * BATCH_SIZE)\n",
    "\n",
    "class SimpleNet(nn.Layer):\n",
    "    def __init__(self):\n",
    "        super(SimpleNet, self).__init__()\n",
    "        self.fc = nn.Linear(IMAGE_SIZE, CLASS_NUM)\n",
    "\n",
    "    def forward(self, image, label=None):\n",
    "        return self.fc(image)\n",
    "\n",
    "simple_net = SimpleNet()\n",
    "opt = paddle.optimizer.SGD(learning_rate=1e-3,\n",
    "                          parameters=simple_net.parameters())\n",
    "\n",
    "loader = DataLoader(dataset,\n",
    "                    batch_size=BATCH_SIZE,\n",
    "                    shuffle=True,\n",
    "                    drop_last=True,\n",
    "                    num_workers=2)\n",
    "\n",
    "for e in range(EPOCH_NUM):\n",
    "    for i, (image, label) in enumerate(loader()):\n",
    "        out = simple_net(image)\n",
    "        loss = F.cross_entropy(out, label)\n",
    "        avg_loss = paddle.mean(loss)\n",
    "        avg_loss.backward()\n",
    "        opt.minimize(avg_loss)\n",
    "        simple_net.clear_gradients()\n",
    "        print(\"Epoch {} batch {}: loss = {}\".format(e, i, np.mean(loss.numpy())))\n"
   ]
  },
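  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The batch_size/shuffle/drop_last semantics documented above can also be checked with a minimal pure-Python batcher (an illustration of the documented behaviour, not PaddlePaddle's implementation; the function below is ours):\n",
    "```python\n",
    "import random\n",
    "\n",
    "def batches(indices, batch_size, shuffle=False, drop_last=False, seed=0):\n",
    "    idx = list(indices)\n",
    "    if shuffle:\n",
    "        random.Random(seed).shuffle(idx)  # deterministic for the demo\n",
    "    out = [idx[i:i + batch_size] for i in range(0, len(idx), batch_size)]\n",
    "    if drop_last and out and len(out[-1]) < batch_size:\n",
    "        out.pop()  # discard the final incomplete mini-batch\n",
    "    return out\n",
    "\n",
    "print(batches(range(10), 4))                  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]\n",
    "print(batches(range(10), 4, drop_last=True))  # [[0, 1, 2, 3], [4, 5, 6, 7]]\n",
    "```"
   ]
  },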
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Summary\n",
     "\n",
     "* PaddlePaddle consists of a front end that users interact with directly and a back end that the system uses to perform the computation.\n",
     "\n",
     "* PaddlePaddle can improve computing performance through asynchronous computation.\n",
     "\n",
     "\n",
     "\n",
     "\n",
     "## Exercise\n",
     "\n",
     "* In the section \"Using asynchronous computation to improve performance\", we mentioned that asynchronous computation reduces the total time of 200 computations to $t_1 + 200 t_2 + t_3$. Why do we need to assume $200 t_2 > 199 t_1$ there?\n",
     "\n",
     "\n",
     "\n",
     "## Scan the QR code to visit the [forum](https://discuss.gluon.ai/t/topic/1881)\n",
     "\n",
     "![](../img/qr_async-computation.svg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 2.0.0b0 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
