{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.6.9-final"
    },
    "colab": {
      "name": "hw13_meta_omniglot.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Iallen520/lhy_DL_Hw/blob/master/hw13_meta_omniglot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p3GrThE2-evu",
        "colab_type": "text"
      },
      "source": [
        "# **HW13 Meta Learning: Omniglot Few-Shot Classification**\n",
        "Goal: reproduce the experimental results of Finn et al.\n",
        "\n",
        "Walkthrough video: https://drive.google.com/open?id=1DjwXTpEVK__f5dmlkU4kUgaaTmFtHfIw\n",
        "\n",
        "Walkthrough slides: https://drive.google.com/open?id=1FUVULNb8LwTt8Ixs3vra6poGeRcOap4n\n",
        "\n",
        "##### **Before you continue reading, please run the first two code blocks of Step 1 to download the data.**\n",
        "References:\n",
        "1. A repo containing lots of few-shot learning models: https://github.com/oscarknagg/few-shot\n",
        "2. A pytorch implementation: https://github.com/dragen1860/MAML-Pytorch\n",
        "3. The official TensorFlow implementation: https://github.com/cbfinn/maml\n",
        "4. Omniglot dataset: https://github.com/brendenlake/omniglot\n",
        "\n",
        "If you have any questions, feel free to email the TAs at ntu-ml-2020spring-ta@googlegroups.com\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s_0JhLS-_UXi",
        "colab_type": "text"
      },
      "source": [
        "Let's take a look at what the Omniglot dataset looks like\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FuaKrzxU_a2P",
        "colab_type": "code",
        "outputId": "e0ccf0a3-c7d7-4887-9c26-058153295f55",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 297
        }
      },
      "source": [
        "from PIL import Image\n",
        "from IPython.display import display\n",
        "for i in range(10, 20):\n",
        "  im = Image.open(\"Omniglot/images_background/Japanese_(hiragana).0/character13/0500_\" + str(i) + \".png\")\n",
        "  display(im)"
      ],
      "execution_count": 3,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CC0>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABEUlEQVR4nGP8z4AbMOGRQ5F8uekLmux/BNjHs+0/CkDWqcKJppMFQv1jYGBgYGb69w/FMsb/DAwMDD+bnjMwMHxbb6nIyMDAwMBWqoyk8/+zRwwMDD//vGFmYGBgYBDhQNbJ8Pvvp7t/OL0nBENMZUG1s2vFsz+C71nYsPnz5QSz/Vt1f//DGgj/GV0MbEMYTqDJQrz7Sc/62VVZZtefKIEAlfy3XthM3TBD7Qu2EGL07ZPWncHGzog9bP/+/v1EvvUv9rBlYvk37VsgE1ad/34+zeeL+4UaKywMDAwM7w7/unH46puSKlZUjYz/GRj+z278y2xkbW7Cy4ApyfD1838mQVY0lzLAAx47IDqBDQpJAN4Euv7fFejQAAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8C88>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABNklEQVR4nGP8zwAH//8xM6AAJiT2pdTXuCVfbvmGW5LhPwNuSUaGP7gl5ZkuoUqyMDAwMLw78I9PjVVYSu2AP9P//39ZUSQfFP/8/4dVWvER9y6GC08+T+dCltQ9zvD5wZPr9/7uPsMkzuXKgnAhHPz71aJ07eHHH3/gIghVDIwsEv9ERHF6RevDG9yBwMn8GZvk/2+3nvxnEOe4g+rR////////scmBX+nov+OCV/4jA4jkNR73ed4aD3M032GRvME/5ddV4QKRqX+xSH4vFM4/4cmg+eQ/Fsn/X6doiPDy7vmHVfL/3xdzjB2+o8r9h/mTSTxBn4UJ1SNIgfD1iTS6JDzgfxSJ7UEzFWbnr5fZQrP+YJf8FKcpsegHuhw0yti02QI9MGxkYPwPtZmREUMOJokdAAB60yoWf/hgewAAAABJRU5ErkJggg==\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CF8>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABN0lEQVR4nGP8z4AbsCBz/j9lkGZE4jMi6/wTz7AQWTWKToafqMYy4bESXfI/bklG8Yc/cEoyGbz59R8OIA76eBNq2v/7P/c8gJnM6KHHwsDAcC7iN8y133KEYR7lMmdg/M/A8PUxTPWFtCXOMElGTkYWBgYGbg24rawSPEhuQATC/6dPGe7/Q/EKQvJa0GuGH39RPQpz+FNL0zNXW9gyv/1H8gyU/tuicOXf/6tcEs+RJGGB8HOnrwYjA1r4wSQ/3tP5/vXLpV+i7EiSsPh85/JEmJHh/eelfoyYkv8f7nvNwHDy7HkhbK79/+/fv19JFl/+Y3EQAwMjI+PrA36cWP35////m55y15E1/keSfOurdvAvdskfm/3kdv37j1Xy33Jh7ZWo+v7/h6fbV49UedGTIiO+7AAAZ4kCU7KEzEEAAAAASUVORK5CYII=\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8C88>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABLUlEQVR4nGP8z4AbMKHx/3/7g1vyredeBIcFTfLP7c8w5r+/EMlX90UEISLv/315B5VbepDxPwMDA8OiAmYuiNDfZ0LcUNs/BkEkPzw4D3XHpyZHL0YIU9ydEeYVKP3dW3g51B2MCNcyQgCn47VfUCamVxjY/uH2JwpAlUQLS+RAeH3rnLbIN+ySn6JPsP35/18Kq7Hvz/edPhr75f9/bDoZGLgUGJveb32hgkUnn0H3ob/8vJ8PILT+R4BLMqIXDssyhPyGCTAiuf7fwQiuj9o/5FbC7EK2k8l+2RnFe20WjNiM/f///7+frp6v4Tz04HvyUlaIAbvOf1eNJQ8juKiSXy2lt/7FIfl7Dl/rn//YJf+tFrR7jKwY2Z8MZw/5qDAi8VEkGf4jSzEwAABSseqGZyInRAAAAABJRU5ErkJggg==\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CF8>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABJklEQVR4nGP8z4AbMKFy/+GRfF54D7fkh8Wv8RgLAxCXsGCR+Pfx0BtTfZjkz5dQNz/79+Lhv32HTj4TbtFjZGBg/M/AwHDO/xdE8s87Qdb/v3WsnYz5WBmgkh9PQ73wqLBVg0FAhwPmkv/I4JrgCWQukoN+/2NECy645P9bTXf4PVFDCG7sU33JNFMOwZvIxsIk/03k2/PnvpTsW2RJeAh9lzRjFtdBNRUuyfjzCwOHP2powhzEaPBypvvlvWjOhZn/xlZYUIgH1U641/5/e/6bufT8BSFs/mTkVmH4+gKHgxgYGBh+PmdhxCn5/a85D1YH/f///0Mk336UeEBI/ntXzdv9C7vkvyOWvJnf/2OX/GxlMBNNDuHPvxdlRFGcygBNJrgAAEPeDmCQZ6aqAAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CC0>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABOElEQVR4nGP8z4AbMKHxXz1AVv0fBfxJt/+B4KHp/P/j7U+cxrKEvHyD2041ppP/cUrKqh34D3EKAwMDC5okq9Sdrz9/3v50ysqTEUny//9/v169/K+01e/Rjz8MUppQnf8YGBi+3bxw+fGLu78Y/nxlCjHXZRfiYmRgYPzP8KfzPsO/iw+4FUUEHAyZX0R0xcIcwsLA8P/lQwYmTzsDXlbGX6f/fGf8zYgcQr9//fr19/////9fRgsKczLO/occQiysrKxMDAwM/5fumr0vmEUcrhPZK38OmvowCkiaYQ0EJoWbuyvmWggjRJDj5IkrPw9D8V84Hyb5bsO3////v9sQJHbxP4bkWYmajXcfnM4QWPQHU/JXjbSYqKiwRj9SXP9nhEXQz/cfH/3n0BJCdiEjKQlscEsCAN5i3onYmdekAAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CF8>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABO0lEQVR4nGP8z4AbMOGRo5bks4v/UCRZkNj/ly06zs3w4+eLf4yK7OiSDH9+/Xl+ZObDNwxcG00Qkk+v/WdgYPh/+1XWic9ySSa87DpIxu4v/sfAwMDw69OtpFAxfkaYSYz/GRgYGL68ZWBgYGA4H39IDy4D18nDw8DAwMDwnIkNWY6CQIB55T+2CIBI/j1+8B4DA5P0P2ySF4NYVVkYPi1m/IEq+///////Fwge+/Hz5/sipvif/5EARPKW2Mx/////P8wi+w5ZEmKsrF4vhx3jh/6/aO6CqLkfxS0iIiQbJvsWWScjVOnnM78ZWNTW95wXwuJPXkcGBoY/x5S5cIbQlYOubJh2/v318+fPn7ctLd78x3Dt//Uz/zEwMDxmXymMGUKMCop/GBgY1BI0UAMI6tp/mA5ASGIHADm3qpNJq4xdAAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8DA0>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABI0lEQVR4nGP8z4AbMKFy7yz/jsz9jwz+5su//P///79///7+/v//Pwuqzi9KfP9/Xdv1juEXZxMLA5Lk/39/3r153Mdw9BS/CANjNBMDAyPEQf8ffv92/vizq8+4pBlVg12FGBhYmeB2/nARFJSxTpokWfXpy89/MCdAjWWd8IVRjo/9+3Q+HkaERVBJJm0Ghp9ff6B5GuGg36WbGNXe4AiEd5tMC869xqHzwPcq7XuTsev8/5lTnlWPEUUSKRB+3+L/z4VdklH8qyfj/x84dHrs/8XwOxGHJKshA8NXjof/mLF5hYGBgYFD/84/bK5lYPj///+PxziMfbTg6/+nZ9KYsEo+2PWHgasoE8lKWHwyMDD8/8XAwMiKEgqMJKS+gZcEAF56gf6wykc6AAAAAElFTkSuQmCC\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8CC0>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABXklEQVR4nGP8z4AbMOGRI1Xy9z/ckp+TVv3HKfnn6Lq/MDYLuiSHMkTjv98MzFDJv/9ZGBj+//339ef1symvmN49/bnn1H9rxv8MDAwM/yecjhFmuLPl7bMPv98IszH8ZmCUtuW0h+rkvhT3n4FVV95M/1tpgBsDpxazIDcjIyPUhq/P/zJwSLIxMr4z9u1nRnUQEy8vAwMDw3+G/yw8H+GOQ3bt/w9bnrOZfTZjwiL5/0LaPd63PJ/YsYXQ6wSezacOW/6+jYio/zDwb7Hkuf///5/ij/sDE0Lo/H9IXYOB4deET9+whS3vrbmvfx3axmTJjGns/6dJMpp+4gxid+EiSJL/f98sc7Ph6PmDVfL//++zZWM+/8cu+TZfOOrdf2yS/15u8BCpe/8fm+TXfm1Bj4O//2OT/Dtf0O/g1///sUreMF707T86gEp+S1/2B0PuPzSZbPscgpHUGBgAt9BS1wiwXusAAAAASUVORK5CYII=\n"
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F72F57E8DA0>",
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABQElEQVR4nGP8z4AbMKHwfvxA4TIi6/xbyNDPjMRnQVb5/wkDii2oxuK1Ew2gGMvwn5EBZjAjTPL3W4jAp6uGT86f+MfAwMDAnKwKde25oF8MDAwMDP9e8nD/kWNhYGBg4JymDZV8d+AvAwMDA8OX0mBXDSWIZ9gYGRgY/kPBv68Hdl6SmfXvPxKAOuj/g92rzzEwfcDmlf97nKs4Vu3y/IPml////////9VI5fL3//+v8KMaC9HJbs8kysHAIMT9D4vO/9eEJ/z59y6HMfEPFgcp+7XfF9tyjfMDSsBDJdl6eXf+F1s1GdVUeHz++cnAxOAlvAI5sOGxwsLNzfn9njlyXKNF2X8BLIGAAyBZ8f/zge9oamF++nG/S59L9RayN//DJP8tlpTL3XwbJfT+w73y7KShLDOqoajpFh3gdS0Aq5C/ToYG3GgAAAAASUVORK5CYII=\n"
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XP3HOga_NIzS",
        "colab_type": "text"
      },
      "source": [
        "## **Step 2: Build the Model**\n",
        "Now we can start building the core MAML model.\n",
        "First, we import the packages we need."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pTaZoyuwNIzU",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader, Dataset\n",
        "import torchvision.transforms as transforms\n",
        "import glob\n",
        "from tqdm import tqdm\n",
        "import numpy as np\n",
        "from collections import OrderedDict"
      ],
      "execution_count": 4,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7PK_iXl9NIzY",
        "colab_type": "text"
      },
      "source": [
        "Next we build an nn.Module to serve as the Omniglot classifier.\n",
        "We use a CNN-based classifier.\n",
        "Here is the MAML algorithm (the original cell embedded an image of Algorithm 2 from the MAML paper here, which no longer loads):\n",
        "\n",
        "Since line 10 of the algorithm differentiates with respect to the original parameters θ, not the inner-loop (lines 5–8) parameters θ', in the inner loop we must compute the output logits of the input images with a functional forward pass instead of the nn.Module's built-in forward. The functional forward keeps θ in the computation graph so the meta-gradient can flow back to θ. Below we define both a functional_forward and a forward function.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mTnHAW15NIzZ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def ConvBlock(in_ch, out_ch):\n",
        "  return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding = 1),\n",
        "                       nn.BatchNorm2d(out_ch),\n",
        "                       nn.ReLU(),\n",
        "                       nn.MaxPool2d(kernel_size = 2, stride = 2)) # The original paper says it uses strided convolutions for Omniglot;\n",
        "                                                                  # here we use max pooling instead (the paper uses max pooling for mini-ImageNet).\n",
        "                                                                  # This is not the tip you need to find for report question 3.\n",
        "def ConvBlockFunction(x, w, b, w_bn, b_bn):\n",
        "  x = F.conv2d(x, w, b, padding = 1)\n",
        "  x = F.batch_norm(x, running_mean = None, running_var = None, weight = w_bn, bias = b_bn, training = True)\n",
        "  x = F.relu(x)\n",
        "  x = F.max_pool2d(x, kernel_size = 2, stride = 2)\n",
        "  return x\n",
        "\n",
        "class Classifier(nn.Module):\n",
        "  def __init__(self, in_ch, k_way):\n",
        "    super(Classifier, self).__init__()\n",
        "    self.conv1 = ConvBlock(in_ch, 64)\n",
        "    self.conv2 = ConvBlock(64, 64)\n",
        "    self.conv3 = ConvBlock(64, 64)\n",
        "    self.conv4 = ConvBlock(64, 64)\n",
        "    self.logits = nn.Linear(64, k_way)\n",
        "    \n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    x = self.conv2(x)\n",
        "    x = self.conv3(x)\n",
        "    x = self.conv4(x)\n",
        "    x = x.view(x.shape[0], -1)\n",
        "    x = self.logits(x)\n",
        "    return x\n",
        "\n",
        "  def functional_forward(self, x, params):\n",
        "    '''\n",
        "    Arguments:\n",
        "    x: input images [batch, 1, 28, 28]\n",
        "    params: the model parameters, i.e. the convolution weights and biases and the batch-norm weights and biases,\n",
        "            given as an OrderedDict\n",
        "    '''\n",
        "    for block in [1, 2, 3, 4]:\n",
        "      x = ConvBlockFunction(x, params[f'conv{block}.0.weight'], params[f'conv{block}.0.bias'],\n",
        "                            params.get(f'conv{block}.1.weight'), params.get(f'conv{block}.1.bias'))\n",
        "    x = x.view(x.shape[0], -1)\n",
        "    x = F.linear(x, params['logits.weight'] , params['logits.bias'])\n",
        "    return x\n"
      ],
      "execution_count": 5,
      "outputs": []
    },
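    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Quick sanity check (a minimal sketch, not part of the original assignment): with the model's own parameters passed in as an OrderedDict, `functional_forward` should produce logits of shape [batch, k_way].\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sanity check (not part of the original assignment): run a dummy batch\n",
        "# through functional_forward with the freshly initialized parameters.\n",
        "model = Classifier(1, 5)\n",
        "params = OrderedDict(model.named_parameters())\n",
        "dummy = torch.randn(4, 1, 28, 28)\n",
        "print(model.functional_forward(dummy, params).shape)  # expected: torch.Size([4, 5])"
      ],
      "execution_count": null,
      "outputs": []
    },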
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RF1OTFAUNIzc",
        "colab_type": "text"
      },
      "source": [
        "This function generates labels. In an n_way, k_shot few-shot classification problem, each task has n_way classes with k_shot images per class. This function produces the labels for one such n_way, k_shot classification task.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0btLChelNIzd",
        "colab_type": "code",
        "outputId": "deb85d19-561c-4b94-a5d4-9a9a9978daea",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "def create_label(n_way, k_shot):\n",
        "  return torch.arange(n_way).repeat_interleave(k_shot).long()\n",
        "  \n",
        "# Try generating the labels for a 5-way 2-shot task\n",
        "create_label(5, 2)"
      ],
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": "tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])"
          },
          "metadata": {},
          "execution_count": 6
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sahQn8HtNIzh",
        "colab_type": "text"
      },
      "source": [
        "Next comes the core of MAML. The algorithm is exactly the same as in the original paper: this function updates the parameters using one meta-batch of data. The TA's implementation here is second-order MAML (inner_train_step = 1), corresponding to pp. 13–18 of the meta learning lecture slides; the derivation of the first-order approximation is on p. 25.\n",
        "\n",
        "\n",
        "(http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2019/Lecture/Meta1%20(v6).pdf)\n",
        "\n",
        "A detailed explanation follows:\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "m71AX5z6NIzh",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def MAML(model, optimizer, x, n_way, k_shot, q_query, loss_fn, inner_train_step = 1, inner_lr = 0.4, train = True):\n",
        "  \"\"\"\n",
        "  Args:\n",
        "  x is the input omniglot images for a meta_step, shape = [batch_size, n_way * (k_shot + q_query), 1, 28, 28]\n",
        "  n_way: number of classes in each classification task\n",
        "  k_shot: number of images per class used for the inner-loop (support) update\n",
        "  q_query: number of images per class used at query time for the outer-loop update\n",
        "  \"\"\"\n",
        "  criterion = loss_fn\n",
        "  task_loss = [] # will collect the loss of each task\n",
        "  task_acc = []  # will collect the accuracy of each task\n",
        "  for meta_batch in x:\n",
        "    train_set = meta_batch[:n_way*k_shot] # train_set is the data used to update the inner-loop parameters\n",
        "    val_set = meta_batch[n_way*k_shot:]   # val_set is the data used to update the outer-loop parameters\n",
        "    \n",
        "    fast_weights = OrderedDict(model.named_parameters()) # the inner-loop update must not modify the real parameters, so we store the new parameters θ' in fast_weights\n",
        "    \n",
        "    for inner_step in range(inner_train_step): # this for loop is lines 7–8 of Algorithm 2\n",
        "                                               # in practice we only update the inner loop once, but some tasks may need\n",
        "                                               # multiple updates of θ', so we keep the for loop\n",
        "      train_label = create_label(n_way, k_shot).cuda()\n",
        "      logits = model.functional_forward(train_set, fast_weights)\n",
        "      loss = criterion(logits, train_label)\n",
        "      grads = torch.autograd.grad(loss, fast_weights.values(), create_graph = True) # compute the gradient of the loss w.r.t. θ (∇loss)\n",
        "      fast_weights = OrderedDict((name, param - inner_lr * grad)\n",
        "                                  for ((name, param), grad) in zip(fast_weights.items(), grads)) # use the ∇loss just computed to update θ into θ'\n",
        "  \n",
        "    val_label = create_label(n_way, q_query).cuda()\n",
        "    logits = model.functional_forward(val_set, fast_weights) # compute the logits on val_set with θ'\n",
        "    loss = criterion(logits, val_label)                      # compute the loss on val_set with θ'\n",
        "    task_loss.append(loss)                                   # collect this task's loss\n",
        "    acc = np.asarray(torch.argmax(logits, -1).cpu().numpy() == val_label.cpu().numpy()).mean() # compute the accuracy\n",
        "    task_acc.append(acc)\n",
        "    \n",
        "  model.train()\n",
        "  optimizer.zero_grad()\n",
        "  meta_batch_loss = torch.stack(task_loss).mean() # update θ (not θ') with the loss of the whole meta-batch\n",
        "  if train:\n",
        "    meta_batch_loss.backward()\n",
        "    optimizer.step()\n",
        "  task_acc = np.mean(task_acc)\n",
        "  return meta_batch_loss, task_acc"
      ],
      "execution_count": 7,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OQ2NQoG7NIzj",
        "colab_type": "text"
      },
      "source": [
        "Define the dataset. This dataset returns k_shot+q_query images of one character, so the returned tensor has shape [k_shot+q_query, 1, 28, 28]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dm-4fRguNIzk",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "class Omniglot(Dataset):\n",
        "  def __init__(self, data_dir, k_shot, q_query):\n",
        "    self.file_list = [f for f in glob.glob(data_dir + \"**/character*\", recursive=True)]\n",
        "    self.transform = transforms.Compose([transforms.ToTensor()])\n",
        "    self.n = k_shot + q_query\n",
        "\n",
        "  def __getitem__(self, idx):\n",
        "    sample = np.arange(20)\n",
        "    np.random.shuffle(sample) # each character has 20 drawings; shuffle so that we sample self.n of them at random\n",
        "    img_path = self.file_list[idx]\n",
        "    img_list = [f for f in glob.glob(img_path + \"**/*.png\", recursive=True)]\n",
        "    img_list.sort()\n",
        "    imgs = [self.transform(Image.open(img_file)) for img_file in img_list]\n",
        "    imgs = torch.stack(imgs)[sample[:self.n]] # take k_shot + q_query images of this character\n",
        "    return imgs\n",
        "    \n",
        "  def __len__(self):\n",
        "    return len(self.file_list)    "
      ],
      "execution_count": 8,
      "outputs": []
    },
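    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Quick sanity check (not part of the original assignment; assumes the Omniglot data from Step 1 has been downloaded): one dataset item should have shape [k_shot+q_query, 1, 28, 28].\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sanity check (not part of the original assignment): build a 1-shot, 1-query\n",
        "# dataset and inspect one item. Prints the number of characters and the item shape.\n",
        "ds = Omniglot('./Omniglot/images_background/', 1, 1)\n",
        "print(len(ds), ds[0].shape)  # item shape should be torch.Size([2, 1, 28, 28])"
      ],
      "execution_count": null,
      "outputs": []
    },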
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "21jpiW6ONIzm",
        "colab_type": "text"
      },
      "source": [
        "## **Step 3: Training**\n",
        "Define the hyperparameters"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GwocHCyHNIzm",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "n_way = 5\n",
        "k_shot = 1\n",
        "q_query = 1\n",
        "inner_train_steps = 1\n",
        "inner_lr = 0.4\n",
        "meta_lr = 0.001\n",
        "meta_batch_size = 32\n",
        "max_epoch = 40\n",
        "eval_batches = test_batches = 20\n",
        "train_data_path = './Omniglot/images_background/'\n",
        "test_data_path = './Omniglot/images_evaluation/'"
      ],
      "execution_count": 9,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8Fub59oyNIzp",
        "colab_type": "text"
      },
      "source": [
        "Initialize the dataloaders"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1--S_YWANIzp",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "dataset = Omniglot(train_data_path, k_shot, q_query)\n",
        "train_set, val_set = torch.utils.data.random_split(dataset, [3200, 656])\n",
        "train_loader = DataLoader(train_set,\n",
        "                          batch_size = n_way, # this batch size is not the meta batch size; it is the number of different\n",
        "                                              # characters in one task, i.e. the n_way of few-shot classification\n",
        "                          num_workers = 8,\n",
        "                          shuffle = True,\n",
        "                          drop_last = True)\n",
        "val_loader = DataLoader(val_set,\n",
        "                          batch_size = n_way,\n",
        "                          num_workers = 8,\n",
        "                          shuffle = True,\n",
        "                          drop_last = True)\n",
        "test_loader = DataLoader(Omniglot(test_data_path, k_shot, q_query),\n",
        "                          batch_size = n_way,\n",
        "                          num_workers = 8,\n",
        "                          shuffle = True,\n",
        "                          drop_last = True)\n",
        "train_iter = iter(train_loader)\n",
        "val_iter = iter(val_loader)\n",
        "test_iter = iter(test_loader)"
      ],
      "execution_count": 10,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IRS7YOCRNIzr",
        "colab_type": "text"
      },
      "source": [
        "Initialize the model and optimizer"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Sk9XKcRMNIzr",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "meta_model = Classifier(1, n_way).cuda()\n",
        "optimizer = torch.optim.Adam(meta_model.parameters(), lr = meta_lr)\n",
        "loss_fn = nn.CrossEntropyLoss().cuda()"
      ],
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "31nt2shvNIzt",
        "colab_type": "text"
      },
      "source": [
        "This function fetches one meta-batch of data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XlJQaTh2NIzu",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_meta_batch(meta_batch_size, k_shot, q_query, data_loader, iterator):\n",
        "  data = []\n",
        "  for _ in range(meta_batch_size):\n",
        "    try:\n",
        "      task_data = next(iterator)  # one task_data is the data of one task, with shape [n_way, k_shot+q_query, 1, 28, 28]\n",
        "    except StopIteration:\n",
        "      iterator = iter(data_loader)\n",
        "      task_data = next(iterator)\n",
        "    train_data = task_data[:, :k_shot].reshape(-1, 1, 28, 28)\n",
        "    val_data = task_data[:, k_shot:].reshape(-1, 1, 28, 28)\n",
        "    task_data = torch.cat((train_data, val_data), 0)\n",
        "    data.append(task_data)\n",
        "  return torch.stack(data).cuda(), iterator"
      ],
      "execution_count": 12,
      "outputs": []
    },
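    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Quick sanity check (not part of the original assignment; assumes a GPU runtime and the dataloaders defined above): one meta-batch should have shape [meta_batch_size, n_way * (k_shot + q_query), 1, 28, 28].\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sanity check (not part of the original assignment): fetch one meta-batch\n",
        "# and inspect its shape. With the hyperparameters above, this should be\n",
        "# [32, 5 * (1 + 1), 1, 28, 28].\n",
        "x, train_iter = get_meta_batch(meta_batch_size, k_shot, q_query, train_loader, train_iter)\n",
        "print(x.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },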
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4WRA7zIvNIzv",
        "colab_type": "text"
      },
      "source": [
        "Start training!!!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NwB0bJbgNIzw",
        "colab_type": "code",
        "outputId": "2a606012-5b47-4a23-c850-95058ff7e9ad",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "tags": []
      },
      "source": [
        "for epoch in range(max_epoch):\n",
        "  print(\"Epoch %d\" %(epoch))\n",
        "  train_meta_loss = []\n",
        "  train_acc = []\n",
        "  for step in tqdm(range(len(train_loader) // (meta_batch_size))): # each step here is one meta-gradient update step\n",
        "    x, train_iter = get_meta_batch(meta_batch_size, k_shot, q_query, train_loader, train_iter)\n",
        "    meta_loss, acc = MAML(meta_model, optimizer, x, n_way, k_shot, q_query, loss_fn)\n",
        "    train_meta_loss.append(meta_loss.item())\n",
        "    train_acc.append(acc)\n",
        "  print(\"  Loss    : \", np.mean(train_meta_loss))\n",
        "  print(\"  Accuracy: \", np.mean(train_acc))\n",
        "\n",
        "  # After each epoch, check the validation accuracy\n",
        "  # The TA did not implement early stopping; feel free to add it if you find it necessary\n",
        "  val_acc = []\n",
        "  for eval_step in tqdm(range(len(val_loader) // (eval_batches))):\n",
        "    x, val_iter = get_meta_batch(eval_batches, k_shot, q_query, val_loader, val_iter)\n",
        "    _, acc = MAML(meta_model, optimizer, x, n_way, k_shot, q_query, loss_fn, inner_train_step = 3, train = False) # at evaluation time we take 3 inner-loop update steps\n",
        "    val_acc.append(acc)\n",
        "  print(\"  Validation accuracy: \", np.mean(val_acc))"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": "0%|          | 0/20 [00:00<?, ?it/s]Epoch 0\n100%|██████████| 20/20 [00:16<00:00,  1.22it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  1.6174778282642364\n  Accuracy:  0.4443750000000001\n100%|██████████| 6/6 [00:01<00:00,  3.68it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.5566666666666668\nEpoch 1\n100%|██████████| 20/20 [00:15<00:00,  1.33it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  1.1645474672317504\n  Accuracy:  0.5703125\n100%|██████████| 6/6 [00:01<00:00,  3.03it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.615\nEpoch 2\n100%|██████████| 20/20 [00:15<00:00,  1.32it/s]\n 17%|█▋        | 1/6 [00:00<00:00,  6.50it/s]  Loss    :  0.9998931467533112\n  Accuracy:  0.6171875000000001\n100%|██████████| 6/6 [00:01<00:00,  3.05it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.6450000000000001\nEpoch 3\n100%|██████████| 20/20 [00:15<00:00,  1.33it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  0.8716485649347305\n  Accuracy:  0.6784375\n100%|██████████| 6/6 [00:01<00:00,  3.05it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.6933333333333334\nEpoch 4\n100%|██████████| 20/20 [00:15<00:00,  1.30it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  0.6939334541559219\n  Accuracy:  0.7659374999999999\n100%|██████████| 6/6 [00:02<00:00,  3.00it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.7650000000000001\nEpoch 5\n100%|██████████| 20/20 [00:15<00:00,  1.29it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  0.5953611940145492\n  Accuracy:  0.7965625000000001\n100%|██████████| 6/6 [00:02<00:00,  2.90it/s]\n  0%|          | 0/20 [00:00<?, ?it/s]  Validation accuracy:  0.8066666666666666\nEpoch 6\n100%|██████████| 20/20 [00:15<00:00,  1.28it/s]\n  0%|          | 0/6 [00:00<?, ?it/s]  Loss    :  0.5160135820508003\n  Accuracy:  0.819375\n100%|██████████| 6/6 [00:01<00:00,  3.01it/s]\n  0%|          | 0/20 
[00:00<?, ?it/s]  Validation accuracy:  0.8516666666666667\nEpoch 7\n  Loss    :  0.421753092110157\n  Accuracy:  0.8724999999999999\n  Validation accuracy:  0.8783333333333334\nEpoch 8\n  Loss    :  0.40024181455373764\n  Accuracy:  0.8709374999999999\n  Validation accuracy:  0.9\nEpoch 9\n  Loss    :  0.3407336488366127\n  Accuracy:  0.8934374999999999\n  Validation accuracy:  0.8933333333333334\nEpoch 10\n  Loss    :  0.33732428699731826\n  Accuracy:  0.8896875\n  Validation accuracy:  0.9116666666666666\nEpoch 11\n  Loss    :  0.3060285821557045\n  Accuracy:  0.90625\n  Validation accuracy:  0.89\nEpoch 12\n  Loss    :  0.2907576456665993\n  Accuracy:  0.9121874999999999\n  Validation accuracy:  0.9066666666666667\nEpoch 13\n  Loss    :  0.24957531467080116\n  Accuracy:  0.9209375000000002\n  Validation accuracy:  0.9049999999999999\nEpoch 14\n  Loss    :  0.23836995735764505\n  Accuracy:  0.9296874999999998\n  Validation accuracy:  0.9133333333333334\nEpoch 15\n  Loss    :  0.229273971170187\n  Accuracy:  0.9346875000000001\n  Validation accuracy:  0.9266666666666669\nEpoch 16\n  Loss    :  0.21246950849890708\n  Accuracy:  0.9434374999999999\n  Validation accuracy:  0.9266666666666667\nEpoch 17\n  Loss    :  0.20955298691987992\n  Accuracy:  0.9356249999999999\n  Validation accuracy:  0.9466666666666667\nEpoch 18\n  Loss    :  0.2225188732147217\n  Accuracy:  0.9253124999999999\n  Validation accuracy:  0.93\nEpoch 19\n  Loss    :  0.20172957926988602\n  Accuracy:  0.9371875000000001\n  Validation accuracy:  0.9416666666666665\nEpoch 20\n  Loss    :  0.1836485542356968\n  Accuracy:  0.9421874999999998\n  Validation accuracy:  0.9366666666666669\nEpoch 21\n  Loss    :  0.19438154399394988\n  Accuracy:  0.944375\n  Validation accuracy:  0.9600000000000001\nEpoch 22\n  Loss    :  0.17363672330975533\n  Accuracy:  0.94625\n  Validation accuracy:  0.9450000000000002\nEpoch 23\n  Loss    :  0.1706240888684988\n  Accuracy:  0.9515625\n  Validation accuracy:  0.9533333333333335\nEpoch 24\n  Loss    :  0.16118369363248347\n  Accuracy:  0.9515625000000002\n  Validation accuracy:  0.9516666666666667\nEpoch 25\n  Loss    :  0.1687396250665188\n  Accuracy:  0.9478125000000001\n  Validation accuracy:  0.9483333333333334\nEpoch 26\n  Loss    :  0.15333038978278637\n  Accuracy:  0.9531249999999998\n  Validation accuracy:  0.9533333333333333\nEpoch 27\n  Loss    :  0.15946027897298337\n  Accuracy:  0.9478124999999998\n  Validation accuracy:  0.965\nEpoch 28\n  Loss    :  0.13740293756127359\n  Accuracy:  0.9556250000000001\n  Validation accuracy:  0.9500000000000001\nEpoch 29\n  Loss    :  0.15527944453060627\n  Accuracy:  0.9515624999999999\n  Validation accuracy:  0.9483333333333333\nEpoch 30\n  Loss    :  0.15613775812089442\n  Accuracy:  0.953125\n  Validation accuracy:  0.9666666666666667\nEpoch 31\n  Loss    :  0.14711451418697835\n  Accuracy:  0.95625\n  Validation accuracy:  0.9733333333333335\nEpoch 32\n  Loss    :  0.1393511299043894\n  Accuracy:  0.9568749999999999\n  Validation accuracy:  0.9416666666666665\nEpoch 33\n  Loss    :  0.14280220195651055\n  Accuracy:  0.9578124999999998\n  Validation accuracy:  0.9583333333333335\nEpoch 34\n  Loss    :  0.13199312575161457\n  Accuracy:  0.9568749999999999\n  Validation accuracy:  0.9516666666666667\nEpoch 35\n  Loss    :  0.13894515447318553\n  Accuracy:  0.9575000000000001\n  Validation accuracy:  0.96\nEpoch 36\n  Loss    :  0.13166425563395023\n  Accuracy:  0.9609375\n  Validation accuracy:  0.9616666666666668\nEpoch 37\n  Loss    :  0.1317472517490387\n  Accuracy:  0.9584375000000002\n  Validation accuracy:  0.945\nEpoch 38\n  Loss    :  0.12321555055677891\n  Accuracy:  0.96125\n  Validation accuracy:  0.9600000000000001\nEpoch 39\n  Loss    :  0.12022099792957305\n  Accuracy:  0.9640625\n  Validation accuracy:  0.9616666666666668\n\n"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b2NfNv5XUBg8",
        "colab_type": "text"
      },
      "source": [
        "Evaluate the trained model on the meta-test set. This is the test accuracy to report in your write-up."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DPLJqKHFUA6x",
        "colab_type": "code",
        "outputId": "5a321b17-23ec-4708-9ab0-c41016e72483",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 51
        },
        "tags": []
      },
      "source": [
        "test_acc = []\n",
        "# Evaluate the meta-trained model over all meta-test batches.\n",
        "for test_step in tqdm(range(len(test_loader) // (test_batches))):\n",
        "  x, test_iter = get_meta_batch(test_batches, k_shot, q_query, test_loader, test_iter)\n",
        "  # At test time we take 3 inner-loop update steps before evaluating on the query set.\n",
        "  _, acc = MAML(meta_model, optimizer, x, n_way, k_shot, q_query, loss_fn, inner_train_step = 3, train = False)\n",
        "  test_acc.append(acc)\n",
        "print(\"  Testing accuracy: \", np.mean(test_acc))"
      ],
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": "  Testing accuracy:  0.9411538461538462\n"
        }
      ]
    }
  ]
}