{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 305 Batch Train\n",
    "\n",
     "For more tutorials, visit my tutorial page: https://morvanzhou.github.io/tutorials/\n",
    "My Youtube Channel: https://www.youtube.com/user/MorvanZhou\n",
    "\n",
    "Dependencies:\n",
    "* torch: 0.4\n",
    "\n",
    "https://ptorch.com/docs/4/pytorch-video-train-on-batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch._C.Generator at 0x256c588ebb0>"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "import torch.utils.data as Data\n",
    "\n",
    "torch.manual_seed(1)    # reproducible"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "BATCH_SIZE = 5\n",
     "# BATCH_SIZE = 8  # the first batch would get 8 samples, the second only 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.linspace(1, 10, 10)       # this is x data (torch tensor)\n",
    "y = torch.linspace(10, 1, 10)       # this is y data (torch tensor)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "PyTorch's `DataLoader` organizes a dataset for you: wrap your own data in it to train in mini-batches.\n",
     "\n",
     "Convert numpy arrays (or data in other forms) into tensors, then put them into this wrapper; `DataLoader` can then iterate over the data efficiently.\n",
     "\n",
     "As of PyTorch 1.0, calling `Data.TensorDataset(x, y)` with positional arguments works directly (to be verified)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch_dataset = Data.TensorDataset(x, y)\n",
    "\n",
     "# split the training set into mini-batches\n",
    "loader = Data.DataLoader(\n",
    "    dataset=torch_dataset,      # torch TensorDataset format\n",
    "    batch_size=BATCH_SIZE,      # mini batch size\n",
    "    shuffle=True,               # random shuffle for training\n",
     "    num_workers=2,              # number of subprocesses for loading data\n",
    ")"
   ]
  },
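  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A `TensorDataset` also supports `len()` and indexing: `dataset[i]` returns the i-th `(x, y)` pair as a tuple of tensors. A quick sanity check, reusing `x` and `y` from above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(len(torch_dataset))       # number of samples: 10\n",
    "print(torch_dataset[0])         # first (x, y) pair as a tuple of tensors"
   ]
  },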
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch:  0 | Step:  0 | batch x:  [ 2.  7. 10.  1.  4.] | batch y:  [ 9.  4.  1. 10.  7.]\n",
      "Epoch:  0 | Step:  1 | batch x:  [3. 5. 6. 8. 9.] | batch y:  [8. 6. 5. 3. 2.]\n",
      "Epoch:  1 | Step:  0 | batch x:  [7. 8. 2. 3. 9.] | batch y:  [4. 3. 9. 8. 2.]\n",
      "Epoch:  1 | Step:  1 | batch x:  [ 5.  1.  6.  4. 10.] | batch y:  [ 6. 10.  5.  7.  1.]\n",
      "Epoch:  2 | Step:  0 | batch x:  [ 8.  1.  4.  9. 10.] | batch y:  [ 3. 10.  7.  2.  1.]\n",
      "Epoch:  2 | Step:  1 | batch x:  [7. 6. 5. 2. 3.] | batch y:  [4. 5. 6. 9. 8.]\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(3):   # train entire dataset 3 times\n",
    "    for step, (batch_x, batch_y) in enumerate(loader):  # for each training step\n",
    "        # train your data...\n",
    "        print('Epoch: ', epoch, '| Step: ', step, '| batch x: ',\n",
    "              batch_x.numpy(), '| batch y: ', batch_y.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Suppose a different batch size that does not evenly divide the number of data entries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch:  0 | Step:  0 | batch x:  [ 6.  2.  4. 10.  9.  3.  8.  5.] | batch y:  [5. 9. 7. 1. 2. 8. 3. 6.]\n",
      "Epoch:  0 | Step:  1 | batch x:  [7. 1.] | batch y:  [ 4. 10.]\n",
      "Epoch:  1 | Step:  0 | batch x:  [8. 5. 9. 1. 4. 3. 2. 7.] | batch y:  [ 3.  6.  2. 10.  7.  8.  9.  4.]\n",
      "Epoch:  1 | Step:  1 | batch x:  [10.  6.] | batch y:  [1. 5.]\n",
      "Epoch:  2 | Step:  0 | batch x:  [ 4.  1.  5.  7.  3.  6. 10.  2.] | batch y:  [ 7. 10.  6.  4.  8.  5.  1.  9.]\n",
      "Epoch:  2 | Step:  1 | batch x:  [9. 8.] | batch y:  [2. 3.]\n"
     ]
    }
   ],
   "source": [
    "BATCH_SIZE = 8\n",
    "loader = Data.DataLoader(\n",
    "    dataset=torch_dataset,      # torch TensorDataset format\n",
    "    batch_size=BATCH_SIZE,      # mini batch size\n",
    "    shuffle=True,               # random shuffle for training\n",
    "    num_workers=2,              # subprocesses for loading data\n",
    ")\n",
    "for epoch in range(3):   # train entire dataset 3 times\n",
    "    for step, (batch_x, batch_y) in enumerate(loader):  # for each training step\n",
    "        # train your data...\n",
    "        print('Epoch: ', epoch, '| Step: ', step, '| batch x: ',\n",
    "              batch_x.numpy(), '| batch y: ', batch_y.numpy())"
   ]
  },
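  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the small leftover batch is unwanted (for example, because it makes gradient estimates noisier), `DataLoader` accepts `drop_last=True` to discard the incomplete final batch, so every step sees a full batch. A minimal sketch reusing `torch_dataset` from above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loader = Data.DataLoader(\n",
    "    dataset=torch_dataset,\n",
    "    batch_size=8,\n",
    "    shuffle=True,\n",
    "    drop_last=True,         # discard the final incomplete batch of 2 samples\n",
    ")\n",
    "for step, (batch_x, batch_y) in enumerate(loader):\n",
    "    print('Step: ', step, '| batch size: ', batch_x.size(0))"
   ]
  },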
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
