{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "MTL PyTorch S.ipynb",
      "private_outputs": true,
      "provenance": [],
      "collapsed_sections": [],
      "machine_shape": "hm",
      "authorship_tag": "ABX9TyMsBLded8D5SWjj6PH9rlt7",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Jeremy26/hydranets_course/blob/main/MTL_PyTorch_S.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Welcome to the Multi-Task Learning Workshop!\n",
        "In this workshop, you're going to learn how to build multi-output networks with PyTorch."
      ],
      "metadata": {
        "id": "twDGnszz2B4j"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The great thing about Multi-Task Learning is that the tasks **share weights**. Just as transfer learning saves time during training, here we **save time on training, but also on inference!** You no longer need to run 3x50 layers to solve 3 tasks: **you can run one neural network** and **swap the heads**. On top of that, each task benefits from what the other tasks are learning; **it's like teaching a class of 3 students who share what they learn with each other**, instead of letting each one figure it out alone.<p>\n",
        "\n",
        "Here's how this workshop is going to happen: \n",
        "1.  We're going to **load a multi-task learning dataset** named [UTK Face](https://susanqq.github.io/UTKFace/). It contains 24k images of faces, along with 3 labels: age, gender, and race/ethnicity.\n",
        "2.  We are going to explore the dataset, and we'll realize that we need to solve **3 tasks**: binary classification (gender), multi-class classification (race), and regression (age).\n",
        "3.  With PyTorch, we're going to learn how to **create a DataLoader that returns multiple labels**.\n",
        "4.  To solve the tasks, we'll work with a **pretrained ResNet model**, **behead it**, and **create 3 new heads**.\n",
        "5.  Finally, we'll **train the model** on a training dataset, and **test on the validation dataset**.\n",
        "<p>\n",
        "\n",
        "If you're familiar with PyTorch, this workshop might look simple, but it will be challenging to **get great accuracy with it**. If you're new to PyTorch, you're gonna love it: we're going to **create everything from scratch**.\n"
      ],
      "metadata": {
        "id": "JiVlhi7w2cwI"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 1 — The UTK Face Dataset\n",
        "You can download the UTK Face Dataset [here](https://susanqq.github.io/UTKFace/), but I also uploaded it to Think Autonomous' servers. <p>\n",
        "**The following wget command will download our dataset.**"
      ],
      "metadata": {
        "id": "u9cKZUvf2fw2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "![](https://images.squarespace-cdn.com/content/v1/5d6567d1afafe900010b2c70/1567268409336-V7HQTOKOVGT6OYHAER5D/utk-1.jpg)"
      ],
      "metadata": {
        "id": "CjxFI0pu4nwC"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4Gvz3UGl16oW"
      },
      "outputs": [],
      "source": [
        "!wget https://hydranets-data.s3.eu-west-3.amazonaws.com/UTKFace.zip"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!unzip -q UTKFace.zip"
      ],
      "metadata": {
        "id": "EsLW_FNb2Cdx"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import cv2\n",
        "import random\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "from PIL import Image\n",
        "from torch.utils.data import Dataset\n",
        "from torchvision import transforms\n",
        "import glob\n",
        "import os"
      ],
      "metadata": {
        "id": "5mKtlxWg2nzO"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 2 — Very fast Data Exploration\n",
        "Looking at the dataset, you can see that all the labels are encoded in the image filenames.<p>\n",
        "For example, the image **UTKFace/100_0_0_20170112213500903.jpg.chip.jpg** can be interpreted as follows:\n",
        "\n",
        "* UTKFace/ is a prefix\n",
        "* **100 is the age**\n",
        "* **0 is the gender** (0: male, 1: female)\n",
        "* **0 is the race** (0:White, 1:Black, 2:Asian, 3:Indian, 4:Other)\n",
        "* The rest is the date and the extension (jpg)\n",
        "\n",
        "So: **[age] _ [gender] _ [race] _ [date&time].jpg**\n",
        "\n",
        "\n",
        "The example above is the filename for image number 0. Let's pray the image we see is a very old white man.🙏🏻\n"
      ],
      "metadata": {
        "id": "145c068M2nOG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "image_paths = sorted(glob.glob(\"UTKFace/*.jpg.chip.jpg\"))\n",
        "\n",
        "images = []\n",
        "ages = []\n",
        "genders = []\n",
        "races = []\n",
        "\n",
        "for path in image_paths:\n",
        "    filename = path[8:].split(\"_\")\n",
        "    if len(filename)==4:\n",
        "        images.append(np.array(Image.open(path)))\n",
        "        ages.append(int(filename[0]))\n",
        "        genders.append(int(filename[1]))\n",
        "        races.append(int(filename[2]))"
      ],
      "metadata": {
        "id": "smyHUqAB2vMU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "print(len(images))"
      ],
      "metadata": {
        "id": "3gPAgyTI2xCO"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "dataset_dict = #TODO: Create a dictionary"
      ],
      "metadata": {
        "id": "rR_rFCw92yug"
      },
      "execution_count": null,
      "outputs": []
    },
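    {
      "cell_type": "markdown",
      "source": [
        "If you're stuck on this TODO, here's one possible dictionary (the id-to-label mappings simply follow the filename convention described above):\n",
        "```python\n",
        "dataset_dict = {\n",
        "    'gender_id': {0: 'male', 1: 'female'},\n",
        "    'race_id': {0: 'white', 1: 'black', 2: 'asian', 3: 'indian', 4: 'other'}\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },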
    {
      "cell_type": "code",
      "source": [
        "idx = np.random.randint(len(images))# Hint: Try 19006 for someone who's gonna yell motherfucka at you!\n",
        "\n",
        "plt.imshow(images[idx])\n",
        "plt.show()\n",
        "\n",
        "print(\"Age: \"+str(ages[idx]))\n",
        "print(\"Gender: \"+str(dataset_dict['gender_id'][genders[idx]]))\n",
        "print(\"Race: \"+str(dataset_dict['race_id'][races[idx]]))"
      ],
      "metadata": {
        "id": "BEPuiyx120Tz"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Normalization"
      ],
      "metadata": {
        "id": "AJP8d16A2upW"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "min_age_value, max_age_value = min(ages), max(ages)\n",
        "log_age_values = np.log10(ages)\n",
        "max_age_log_value = log_age_values.max()\n",
        "print('MAX AGE VALUE', max_age_value)\n",
        "print('MIN AGE VALUE', min_age_value)\n",
        "print('MAX AGE LOG VALUE', max_age_log_value)"
      ],
      "metadata": {
        "id": "jgVF1ssa28-x"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "def get_normalized_age_value(original_age_value):\n",
        "    return (original_age_value - min_age_value)/(max_age_value - min_age_value)\n",
        "\n",
        "def get_log_age_value(original_age_value):\n",
        "    return np.log10(original_age_value)/max_age_log_value\n",
        "\n",
        "def get_original_age_from_log_value(log_age_value):\n",
        "    return np.power(10, log_age_value * max_age_log_value) # inverse of log10(age)/max_age_log_value\n",
        "\n",
        "def get_original_age_value(normalized_age_value):\n",
        "    return normalized_age_value * (max_age_value - min_age_value) + min_age_value"
      ],
      "metadata": {
        "id": "YlkU2PGz2-43"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## More Data Exploration\n",
        "\n",
        "Usually, we would check how balanced the dataset is, which means exploring it. Let's not lose too much time here, as it's not our #1 priority.<p>\n",
        "👉 Kaggle has a competition open for this dataset, and they released [a starter for the data visualization](https://www.kaggle.com/svenknoblauch/utkface-data-exploration). Let's use it directly in our work!"
      ],
      "metadata": {
        "id": "aZn9mnMD3BqB"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import seaborn as sns\n",
        "import pandas as pd\n",
        "\n",
        "d = {'age': ages, 'gender': genders, 'race': races}\n",
        "df = pd.DataFrame(data=d)"
      ],
      "metadata": {
        "id": "Msg6QSWR2_g4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(21, 7))\n",
        "fig.suptitle('age distribution for gender', fontsize=20)\n",
        "\n",
        "df_age_male = df.groupby('gender').get_group(0)\n",
        "df_age_female = df.groupby('gender').get_group(1)\n",
        "sns.histplot(data=df_age_male, x=\"age\", kde=True, color=\"red\", ax=ax1, bins=50)\n",
        "sns.histplot(data=df_age_female, x=\"age\", kde=True, color=\"orange\", ax=ax2, bins=50)\n",
        "ax1.title.set_text(\"male\")\n",
        "ax2.title.set_text(\"female\")\n",
        "\n",
        "\n",
        "sns.kdeplot(data=df_age_male, x=\"age\", color=\"red\", ax=ax3)\n",
        "sns.kdeplot(data=df_age_female, x=\"age\", color=\"orange\", ax=ax3)\n",
        "ax3.legend([\"male\", \"female\"], fontsize=\"large\")\n",
        "ax3.title.set_text(\"male vs female\")\n",
        "\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "kiCk_kJY3Dp5"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "pie, (ax1, ax2) = plt.subplots(1, 2, figsize=[10,6])\n",
        "df.gender.value_counts().plot(kind='pie', labels=[\"male\", \"female\"], pctdistance=0.5, ax = ax1)\n",
        "ax1.yaxis.set_visible(False)\n",
        "ax1.title.set_text('gender distribution')\n",
        "\n",
        "df.race.value_counts().plot(kind='pie', labels=[\"White\", \"Black\", \"Asian\", \"Indian\", \"Others\"], pctdistance=0.5, ax = ax2)\n",
        "ax2.yaxis.set_visible(False)\n",
        "ax2.title.set_text('race distribution')\n",
        "\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "V2NuaQUn3GaQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(20, 5))\n",
        "fig.suptitle('age distribution for all races', fontsize=20, y=1.1)\n",
        "fig.tight_layout()\n",
        "\n",
        "df_race_white = df.groupby('race').get_group(0)\n",
        "sns.histplot(data=df_race_white, x=\"age\", kde=True, color=\"red\", ax=ax1, bins=40)\n",
        "ax1.title.set_text(\"white mean_age: \"+\"%.2f\" % df_race_white.mean()[\"age\"])\n",
        "\n",
        "df_race_black = df.groupby('race').get_group(1)\n",
        "sns.histplot(data=df_race_black, x=\"age\", kde=True, color=\"orange\", ax=ax2, bins=40)\n",
        "ax2.title.set_text(\"black mean_age: \"+\"%.2f\" % df_race_black.mean()[\"age\"])\n",
        "\n",
        "df_race_asian = df.groupby('race').get_group(2)\n",
        "sns.histplot(data=df_race_asian, x=\"age\", kde=True, color=\"blue\", ax=ax3, bins=40)\n",
        "ax3.title.set_text(\"asian mean_age: \"+\"%.2f\" % df_race_asian.mean()[\"age\"])\n",
        "\n",
        "df_race_indian = df.groupby('race').get_group(3)\n",
        "sns.histplot(data=df_race_indian, x=\"age\", kde=True, color=\"green\", ax=ax4, bins=40)\n",
        "ax4.title.set_text(\"indian mean_age: \"+\"%.2f\" % df_race_indian.mean()[\"age\"])\n",
        "\n",
        "df_race_other = df.groupby('race').get_group(4)\n",
        "sns.histplot(data=df_race_other, x=\"age\", kde=True, color=\"purple\", ax=ax5, bins=40)\n",
        "ax5.title.set_text(\"other mean_age: \"+\"%.2f\" % df_race_other.mean()[\"age\"])\n",
        "\n",
        "plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, wspace=0.4, hspace=0.4)\n",
        "\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "ewj0W8d83G--"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 3 — Create a Multi-Task DataLoader with PyTorch\n",
        "\n",
        "In PyTorch, we usually need to create 2 elements:\n",
        "* A Dataset — that does exactly what we've done right above\n",
        "* A DataLoader — that batches our data and converts it into PyTorch tensors\n",
        "\n",
        "## Dataset Minimal Code\n",
        "A Dataset is a class that inherits from `Dataset` and implements, at a minimum, 3 methods:\n",
        "```python\n",
        "def __init__(self):\n",
        "def __len__(self):\n",
        "def __getitem__(self, index):\n",
        "```\n",
        "Notice the double underscores: nothing scary, they just mean we're overriding some of Python's special (\"dunder\") methods.\n",
        "* The `__init__` method is called when we create a Dataset.\n",
        "* The `__len__` method is what's returned when we check the length of our dataset (with `len(dataset)`).\n",
        "* The `__getitem__` method is called when we ask for one element of the dataset (with `dataset[i]`)"
      ],
      "metadata": {
        "id": "35DotW3Q2i9h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Just in case your images don't load properly\n",
        "from PIL import ImageFile\n",
        "ImageFile.LOAD_TRUNCATED_IMAGES = True"
      ],
      "metadata": {
        "id": "PiRvwx7K3LAi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "class UTKFace(Dataset):\n",
        "    def __init__(self, image_paths):\n",
        "        # Mean and Std for ImageNet\n",
        "        mean=[0.485, 0.456, 0.406] # ImageNet\n",
        "        std=[0.229, 0.224, 0.225] # ImageNet\n",
        "\n",
        "        # Define the Transforms\n",
        "        self.transform = #TODO: Create 3 transforms: Resize, To Tensor, Normalize\n",
        "\n",
        "        # Set Inputs and Labels\n",
        "        self.image_paths = image_paths\n",
        "        self.images = []\n",
        "        self.ages = []\n",
        "        self.genders = []\n",
        "        self.races = []\n",
        "\n",
        "        for path in image_paths:\n",
        "            filename = path[8:].split(\"_\")\n",
        "            if len(filename)==4:\n",
        "                self.images.append(path)\n",
        "                self.ages.append(int(filename[0]))\n",
        "                self.genders.append(int(filename[1]))\n",
        "                self.races.append(int(filename[2]))\n",
        "    \n",
        "    def __len__(self):\n",
        "         return len(self.images)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        # Load an Image\n",
        "        img = #TODO: Load an Image\n",
        "        # Transform it\n",
        "        img = #TODO: Transform it\n",
        "\n",
        "        # Get the Labels\n",
        "        age = #TODO: \n",
        "        gender = #TODO: \n",
        "        race = #TODO: \n",
        "        \n",
        "        # Return the sample of the dataset\n",
        "        sample = #TODO: \n",
        "        return sample"
      ],
      "metadata": {
        "id": "LFHdq6sq3LC8"
      },
      "execution_count": null,
      "outputs": []
    },
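    {
      "cell_type": "markdown",
      "source": [
        "If you need a hint for the TODOs above, here is one possible reference implementation. The 128x128 resize and the dictionary keys (`image`, `age`, `gender`, `race`) are my choices, not requirements (though the visualization cell further down does assume those keys):\n",
        "```python\n",
        "from PIL import Image\n",
        "from torch.utils.data import Dataset\n",
        "from torchvision import transforms\n",
        "\n",
        "class UTKFaceSolved(Dataset):\n",
        "    def __init__(self, image_paths):\n",
        "        mean = [0.485, 0.456, 0.406]  # ImageNet\n",
        "        std = [0.229, 0.224, 0.225]   # ImageNet\n",
        "        self.transform = transforms.Compose([\n",
        "            transforms.Resize((128, 128)),\n",
        "            transforms.ToTensor(),\n",
        "            transforms.Normalize(mean, std)])\n",
        "\n",
        "        self.images, self.ages, self.genders, self.races = [], [], [], []\n",
        "        for path in image_paths:\n",
        "            filename = path[8:].split('_')  # drop the 'UTKFace/' prefix, split the labels\n",
        "            if len(filename) == 4:\n",
        "                self.images.append(path)\n",
        "                self.ages.append(int(filename[0]))\n",
        "                self.genders.append(int(filename[1]))\n",
        "                self.races.append(int(filename[2]))\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.images)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        img = self.transform(Image.open(self.images[index]).convert('RGB'))\n",
        "        return {'image': img,\n",
        "                'age': self.ages[index],\n",
        "                'gender': self.genders[index],\n",
        "                'race': self.races[index]}\n",
        "```"
      ],
      "metadata": {}
    },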
    {
      "cell_type": "markdown",
      "source": [
        "## Train/Test Split"
      ],
      "metadata": {
        "id": "2Jbjwr0x3QWu"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "from torch.utils.data import random_split\n",
        "from torch.utils.data import DataLoader"
      ],
      "metadata": {
        "id": "nLusDMGM3Oc7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# define the train and val splits\n",
        "TRAIN_SPLIT = #TODO: \n",
        "VAL_SPLIT = #TODO: \n",
        "\n",
        "# set the device we will be using to train the model\n",
        "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")"
      ],
      "metadata": {
        "id": "Xv3W1W_53OfJ"
      },
      "execution_count": null,
      "outputs": []
    },
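    {
      "cell_type": "markdown",
      "source": [
        "A common choice (an assumption on my side; any pair that sums to 1 works):\n",
        "```python\n",
        "TRAIN_SPLIT = 0.8\n",
        "VAL_SPLIT = 1 - TRAIN_SPLIT  # 0.2\n",
        "```"
      ],
      "metadata": {}
    },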
    {
      "cell_type": "code",
      "source": [
        "num_train = round(TRAIN_SPLIT*len(image_paths))\n",
        "num_val = len(image_paths) - num_train # the two must sum to len(image_paths), as random_split requires\n",
        "\n",
        "print('No of train samples', num_train)\n",
        "print('No of validation Samples', num_val)"
      ],
      "metadata": {
        "id": "ZSmueQT-3Oh6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "(train_dataset, valid_dataset) = random_split(image_paths,[num_train, num_val],generator=torch.Generator().manual_seed(42))"
      ],
      "metadata": {
        "id": "WmXKrIJ23VDi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Dataloader"
      ],
      "metadata": {
        "id": "HhWM2fRERnWT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "BATCH_SIZE = #TODO: \n",
        "\n",
        "train_dataloader = #TODO: \n",
        "val_dataloader = #TODO: \n",
        "\n",
        "train_steps = len(train_dataloader.dataset) // BATCH_SIZE\n",
        "val_steps = len(val_dataloader.dataset) // BATCH_SIZE"
      ],
      "metadata": {
        "id": "JHRZUgB23Wfc"
      },
      "execution_count": null,
      "outputs": []
    },
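    {
      "cell_type": "markdown",
      "source": [
        "Stuck? One possible fill-in (the batch size of 32 and the shuffle choices are assumptions; tune the batch size to your GPU memory). Since `train_dataset` and `valid_dataset` are splits of `image_paths`, we wrap each split in our `UTKFace` dataset:\n",
        "```python\n",
        "BATCH_SIZE = 32  # assumption\n",
        "\n",
        "train_dataloader = DataLoader(UTKFace(train_dataset), batch_size=BATCH_SIZE, shuffle=True)\n",
        "val_dataloader = DataLoader(UTKFace(valid_dataset), batch_size=BATCH_SIZE, shuffle=False)\n",
        "```"
      ],
      "metadata": {}
    },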
    {
      "cell_type": "code",
      "source": [
        "def imshow(img):\n",
        "    #TODO: "
      ],
      "metadata": {
        "id": "l_Q_l_YU3Xwr"
      },
      "execution_count": null,
      "outputs": []
    },
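    {
      "cell_type": "markdown",
      "source": [
        "One way to write `imshow` (assuming the ImageNet normalization from the Dataset: we undo it and reorder the channels for matplotlib):\n",
        "```python\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "def imshow(img):\n",
        "    # The tensor is CxHxW; matplotlib wants HxWxC\n",
        "    img = img.numpy().transpose((1, 2, 0))\n",
        "    # Undo the ImageNet normalization so the colors look right\n",
        "    mean = np.array([0.485, 0.456, 0.406])\n",
        "    std = np.array([0.229, 0.224, 0.225])\n",
        "    img = np.clip(std * img + mean, 0, 1)\n",
        "    plt.imshow(img)\n",
        "```"
      ],
      "metadata": {}
    },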
    {
      "cell_type": "code",
      "source": [
        "sample = next(iter(train_dataloader))\n",
        "\n",
        "imshow(sample['image'][0])\n",
        "plt.show()\n",
        "print(sample[\"age\"][0].item())\n",
        "print(get_normalized_age_value(sample[\"age\"][0].item()))\n",
        "\n",
        "\n",
        "print(dataset_dict['gender_id'][sample[\"gender\"][0].item()])\n",
        "print(dataset_dict['race_id'][sample[\"race\"][0].item()])"
      ],
      "metadata": {
        "id": "9Z7BMFKk3ZNd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 4 — Multi-Task Neural Network with PyTorch\n",
        "\n",
        "In this part, we want to:\n",
        "1. Define a Base Neural Network\n",
        "2. Create the Heads\n",
        "3. Train it\n",
        "4. Test it\n"
      ],
      "metadata": {
        "id": "5uIKTYob3c0L"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Define the Base Model\n",
        "In this part, we're going to load a pretrained ResNet18. **I'm going to teach you how to add layers to a pretrained network, no matter the network, so if you'd like to load an Inception v3 and go all DiCaprio on this, be my guest!**\n"
      ],
      "metadata": {
        "id": "qy89rCIq3fbD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "from torchvision.models import resnet50, resnet101, resnet18, resnet34"
      ],
      "metadata": {
        "id": "MpDvthOm3a3T"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "net = #TODO:"
      ],
      "metadata": {
        "id": "duj0uY7sUw0f"
      },
      "execution_count": null,
      "outputs": []
    },
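    {
      "cell_type": "markdown",
      "source": [
        "If in doubt, a reasonable baseline (ResNet18 is my assumption; any of the imported ResNets works the same way):\n",
        "```python\n",
        "from torchvision.models import resnet18\n",
        "\n",
        "net = resnet18(pretrained=True)  # downloads ImageNet weights on first use\n",
        "```"
      ],
      "metadata": {}
    },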
    {
      "cell_type": "code",
      "source": [
        "!pip install torchviz\n",
        "from torchviz import make_dot\n",
        "\n",
        "#make_dot(net(sample[\"image\"].to(device)), params=dict(list(net.named_parameters()))).render(\"ResNet50\", format=\"png\")"
      ],
      "metadata": {
        "id": "JU6lwV27UmJT"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Create the HydraNet class\n",
        "\n",
        "\n",
        "The final layer of a ResNet is called `net.fc`. If we want to add layers, we can either replace that final layer, or add layers after it.\n",
        "\n",
        "\n",
        "Here's an example that replaces the final layer with a 10-neuron output:\n",
        "```python\n",
        "self.net = models.resnet18(pretrained=True)\n",
        "self.net.fc = nn.Linear(self.net.fc.in_features, 10)\n",
        "```\n",
        "\n",
        "We can also use the OrderedDict method and have a set of layers (a real head!):\n",
        "```python\n",
        "from collections import OrderedDict\n",
        "self.n_features = self.net.fc.in_features\n",
        "self.net.fc1 = nn.Sequential(OrderedDict([\n",
        "    ('linear', nn.Linear(self.n_features,self.n_features)),\n",
        "    ('relu1', nn.ReLU()),\n",
        "    ('final', nn.Linear(self.n_features, 1))\n",
        "    ]))\n",
        "```"
      ],
      "metadata": {
        "id": "gPAopiy83nic"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from collections import OrderedDict"
      ],
      "metadata": {
        "id": "vUnDAS5X3lKQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "class HydraNetModified(nn.Module):\n",
        "    def __init__(self, net):\n",
        "        pass\n",
        "        #TODO: \n",
        "        \n",
        "    def forward(self, x):\n",
        "        pass\n",
        "        #TODO: "
      ],
      "metadata": {
        "id": "U3gGEPKX3ltY"
      },
      "execution_count": null,
      "outputs": []
    },
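    {
      "cell_type": "markdown",
      "source": [
        "If you're stuck, here's one possible implementation following the OrderedDict pattern above. The head layout (one hidden Linear + ReLU per head; 1 output for age and gender, 5 for race) is my choice, not the only valid one:\n",
        "```python\n",
        "import torch.nn as nn\n",
        "from collections import OrderedDict\n",
        "\n",
        "class HydraNetSolved(nn.Module):\n",
        "    def __init__(self, net):\n",
        "        super().__init__()\n",
        "        self.net = net\n",
        "        self.n_features = self.net.fc.in_features\n",
        "        self.net.fc = nn.Identity()  # 'behead' the ResNet: keep only the features\n",
        "\n",
        "        def head(n_out):\n",
        "            return nn.Sequential(OrderedDict([\n",
        "                ('linear', nn.Linear(self.n_features, self.n_features)),\n",
        "                ('relu1', nn.ReLU()),\n",
        "                ('final', nn.Linear(self.n_features, n_out))]))\n",
        "\n",
        "        self.age_head = head(1)     # regression: 1 value\n",
        "        self.gender_head = head(1)  # binary classification: 1 logit\n",
        "        self.race_head = head(5)    # 5 classes: 5 logits\n",
        "\n",
        "    def forward(self, x):\n",
        "        features = self.net(x)\n",
        "        return self.age_head(features), self.gender_head(features), self.race_head(features)\n",
        "```"
      ],
      "metadata": {}
    },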
    {
      "cell_type": "code",
      "source": [
        "model = #TODO: \n",
        "model.to(device=device)\n",
        "\n",
        "race_loss = #TODO:\n",
        "gender_loss = #TODO:\n",
        "age_loss = #TODO: \n",
        "\n",
        "lr =#TODO: \n",
        "momentum = 0.09 # Meaning we keep 9% of the previous update direction\n",
        "optimizer = #TODO: "
      ],
      "metadata": {
        "id": "T4Y1KvOf3rcd"
      },
      "execution_count": null,
      "outputs": []
    },
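    {
      "cell_type": "markdown",
      "source": [
        "A possible set of choices for these TODOs (assumptions on my side: CrossEntropy for race, BCE on a sigmoid for gender, L1 for age, plain SGD; other combinations are defensible):\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "race_loss = nn.CrossEntropyLoss()  # multi-class classification (5 logits)\n",
        "gender_loss = nn.BCELoss()         # binary classification: feed it sigmoid(logit)\n",
        "age_loss = nn.L1Loss()             # regression on the age value\n",
        "\n",
        "lr = 1e-4  # assumption: a small learning rate for a pretrained backbone\n",
        "momentum = 0.09\n",
        "# model = HydraNetModified(net); model.to(device=device)\n",
        "# optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)\n",
        "```"
      ],
      "metadata": {}
    },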
    {
      "cell_type": "code",
      "source": [
        "#make_dot(model(sample[\"image\"].to(device)), params=dict(list(model.named_parameters()))).render(\"HydraNet\", format=\"png\")"
      ],
      "metadata": {
        "id": "KBdvX9_mUy6X"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Train the Model\n",
        "Here's a typical training loop in PyTorch\n",
        "```python\n",
        "losses = []\n",
        "for epoch in range(num_epochs):\n",
        "    running_loss = 0.0\n",
        "    for data in dataloader:\n",
        "        images, labels = data\n",
        "        outputs = model(images)\n",
        "        loss = criterion(outputs, labels)\n",
        "        optimizer.zero_grad()\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "\n",
        "        running_loss += loss.item() * images.size(0)\n",
        "\n",
        "    epoch_loss = running_loss / len(dataloader.dataset)\n",
        "    losses.append(epoch_loss)\n",
        "plt.plot(losses)\n",
        "```\n",
        "\n",
        "In our case, it will be a bit different, as we have several losses to track (3 tasks, each for training and validation)."
      ],
      "metadata": {
        "id": "tkDAtQDW3wQp"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from sklearn.metrics import mean_absolute_error as mae\n",
        "\n",
        "n_epochs = #TODO: \n",
        "\n",
        "logger = {\"train_loss\": list(),\n",
        "          \"validation_loss\": list(),\n",
        "          \"train_gender_loss\": list(),\n",
        "          \"train_race_loss\": list(),\n",
        "          \"train_age_loss\": list(),\n",
        "          \"validation_gender_loss\": list(),\n",
        "          \"validation_race_loss\": list(),\n",
        "          \"validation_age_loss\": list(),\n",
        "          }"
      ],
      "metadata": {
        "id": "DehfMXfL7nyh"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "sig = nn.Sigmoid()\n",
        "\n",
        "for epoch in range(n_epochs):\n",
        "    model.train()\n",
        "\n",
        "    total_training_loss = 0\n",
        "    total_validation_loss = 0\n",
        "    training_gender_loss = 0\n",
        "    training_race_loss = 0\n",
        "    training_age_loss = 0\n",
        "    validation_gender_loss = 0\n",
        "    validation_race_loss = 0\n",
        "    validation_age_loss = 0\n",
        "\n",
        "    for i, data in enumerate(train_dataloader):\n",
        "        inputs = #\n",
        "        age_label = #\n",
        "        gender_label = #\n",
        "        race_label = #\n",
        "\n",
        "        #TODO: Zero Grad\n",
        "        age_output, gender_output, race_output = #\n",
        "        \n",
        "        loss_1 = #\n",
        "        loss_2 = #\n",
        "        loss_3 = #\n",
        "\n",
        "        loss = #\n",
        "\n",
        "        #Backward\n",
        "        #Step\n",
        "        total_training_loss += loss.item() # .item() so we don't keep the autograd graph around\n",
        "        \n",
        "        training_race_loss += loss_1.item()\n",
        "        training_gender_loss += loss_2.item()\n",
        "        training_age_loss += loss_3.item()\n",
        "    print('EPOCH ', epoch+1)\n",
        "    print(\"Training Losses (last batch): Race: {}, Gender: {}, Age: {}\".format(loss_1, loss_2, loss_3))\n",
        "\n",
        "    with torch.no_grad():\n",
        "        model.eval()\n",
        "\n",
        "        for i, data in enumerate(val_dataloader):\n",
        "            inputs = #\n",
        "            age_label = #\n",
        "            gender_label = #\n",
        "            race_label =  #\n",
        "            age_output, gender_output, race_output = #\n",
        "        \n",
        "            loss_1 = #\n",
        "            loss_2 = #\n",
        "            loss_3 = #\n",
        "\n",
        "            loss = #\n",
        "            total_validation_loss += loss.item()\n",
        "\n",
        "            validation_race_loss += loss_1.item()\n",
        "            validation_gender_loss += loss_2.item()\n",
        "            validation_age_loss += loss_3.item()\n",
        "        print(\"Validation Losses (last batch): Race: {}, Gender: {}, Age: {}\".format(loss_1, loss_2, loss_3))\n",
        "\n",
        "    avgTrainLoss = total_training_loss / train_steps\n",
        "    avgValLoss = total_validation_loss / val_steps\n",
        "    \n",
        "    print('Average Losses — Training: {} | Validation {}'.format(avgTrainLoss, avgValLoss))\n",
        "    print() \n",
        "    avgTrainGenderLoss = training_gender_loss/len(train_dataloader.dataset)\n",
        "    avgTrainRaceLoss = training_race_loss/len(train_dataloader.dataset)\n",
        "    avgTrainAgeLoss = training_age_loss/len(train_dataloader.dataset)\n",
        "\n",
        "    avgValGenderLoss = validation_gender_loss/len(val_dataloader.dataset)\n",
        "    avgValRaceLoss = validation_race_loss/len(val_dataloader.dataset)\n",
        "    avgValAgeLoss = validation_age_loss/len(val_dataloader.dataset)\n",
        "\n",
        "    logger[\"train_loss\"].append(avgTrainLoss)\n",
        "    logger[\"train_gender_loss\"].append(avgTrainGenderLoss)\n",
        "    logger[\"train_race_loss\"].append(avgTrainRaceLoss)\n",
        "    logger[\"train_age_loss\"].append(avgTrainAgeLoss)\n",
        "    \n",
        "    logger[\"validation_loss\"].append(avgValLoss)\n",
        "    logger[\"validation_gender_loss\"].append(avgValGenderLoss)\n",
        "    logger[\"validation_race_loss\"].append(avgValRaceLoss)\n",
        "    logger[\"validation_age_loss\"].append(avgValAgeLoss)\n"
      ],
      "metadata": {
        "id": "mcw6i-f93uaR"
      },
      "execution_count": null,
      "outputs": []
    },
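    {
      "cell_type": "markdown",
      "source": [
        "If the blanks in the loop above resist you, here's one way to fill the training part. It assumes the loss choices suggested earlier, the dict keys from the Dataset, and a plain unweighted sum of the three losses (weighting them is a valid tuning knob). The same pattern, minus `zero_grad`/`backward`/`step`, applies to the validation half:\n",
        "```python\n",
        "inputs = data['image'].to(device=device)\n",
        "age_label = data['age'].to(device=device)  # consider get_normalized_age_value(...) targets instead\n",
        "gender_label = data['gender'].to(device=device)\n",
        "race_label = data['race'].to(device=device)\n",
        "\n",
        "optimizer.zero_grad()\n",
        "age_output, gender_output, race_output = model(inputs)\n",
        "\n",
        "loss_1 = race_loss(race_output, race_label)\n",
        "loss_2 = gender_loss(sig(gender_output), gender_label.unsqueeze(1).float())\n",
        "loss_3 = age_loss(age_output, age_label.unsqueeze(1).float())\n",
        "loss = loss_1 + loss_2 + loss_3  # unweighted sum of the three task losses\n",
        "\n",
        "loss.backward()\n",
        "optimizer.step()\n",
        "```"
      ],
      "metadata": {}
    },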
    {
      "cell_type": "code",
      "source": [
        "torch.save(model.state_dict(), \"best_model.pth\")"
      ],
      "metadata": {
        "id": "AyeDm4lv4GhE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Show the Results"
      ],
      "metadata": {
        "id": "sDz_2R6833Ar"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "plt.plot(logger[\"train_loss\"])\n",
        "plt.plot(logger[\"validation_loss\"])\n",
        "plt.legend(['Train','Valid'])\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "Mc_pMH9G33JA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "plt.plot(logger[\"train_gender_loss\"])\n",
        "plt.plot(logger[\"validation_gender_loss\"])\n",
        "plt.legend(['Train','Valid'])\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "fAQiwY783-lV"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "plt.plot(logger[\"train_race_loss\"])\n",
        "plt.plot(logger[\"validation_race_loss\"])\n",
        "plt.legend(['Train','Valid'])\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "OLQGIu1T3_NT"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "plt.plot(logger[\"train_age_loss\"])\n",
        "plt.plot(logger[\"validation_age_loss\"])\n",
        "plt.legend(['Train','Valid'])\n",
        "plt.xlabel('Epoch')\n",
        "plt.ylabel('Loss')\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "UsN8rjbL4A5E"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Inference"
      ],
      "metadata": {
        "id": "3uYXZ1vH4HJM"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "indexes = valid_dataset.indices\n",
        "test_indices = indexes[0:10]"
      ],
      "metadata": {
        "id": "qBXkJlIQ4JLt"
      },
      "execution_count": null,
      "outputs": []
    },
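    {
      "cell_type": "markdown",
      "source": [
        "If you get stuck on the TODOs in the next cell, here's one possible fill-in. It assumes the same 128x128 transform as the Dataset, regression on raw ages (apply `get_original_age_value` instead if you trained on normalized ages), a 0.5 sigmoid threshold for gender, and an argmax over the 5 race logits:\n",
        "```python\n",
        "transform = transforms.Compose([\n",
        "    transforms.Resize((128, 128)),\n",
        "    transforms.ToTensor(),\n",
        "    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])\n",
        "\n",
        "image_norm = transform(Image.fromarray(images[idx]))\n",
        "image_norm = image_norm.unsqueeze(0).to(device)  # add a batch dimension\n",
        "\n",
        "model.eval()\n",
        "with torch.no_grad():\n",
        "    age, gender, race = model(image_norm)\n",
        "\n",
        "predicted_age = age.item()\n",
        "\n",
        "sigmoid = nn.Sigmoid()\n",
        "out_gender = int(torch.round(sigmoid(gender)).item())  # 0: male, 1: female\n",
        "out_race = int(torch.argmax(race, dim=1).item())       # index into race_classes\n",
        "```"
      ],
      "metadata": {}
    },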
    {
      "cell_type": "code",
      "source": [
        "for idx in test_indices:\n",
        "    plt.figure()\n",
        "    plt.imshow(images[idx])\n",
        "    plt.show()\n",
        "\n",
        "    image_norm = #\n",
        "    image_norm = #\n",
        "\n",
        "    model.eval()\n",
        "    age, gender, race = #\n",
        "\n",
        "    predicted_age = #\n",
        "\n",
        "    print(\"Age:\", str(ages[idx]), \"| Predicted:\", str(int(predicted_age)))\n",
        "\n",
        "    sigmoid = nn.Sigmoid()\n",
        "    out_gender = #\n",
        "    gender_classes = [\"male\", \"female\"]\n",
        "    print(\"Gender:\", str(dataset_dict['gender_id'][genders[idx]]), \"| Predicted:\", str(gender_classes[out_gender]))\n",
        "\n",
        "    out_race = #\n",
        "    race_classes = [\"white\", \"black\", \"asian\", \"indian\", \"other\"]\n",
        "    print(\"Race:\", str(dataset_dict['race_id'][races[idx]]), \"| Predicted:\", str(race_classes[out_race]))\n",
        "    print('\\n')"
      ],
      "metadata": {
        "id": "xAOuNQm04K1t"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}