{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Face Depixelizer Eng",
      "provenance": [],
      "private_outputs": true,
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Eng.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "siqzcgRRyr_n",
        "colab_type": "text"
      },
      "source": [
        "<b><font color=\"black\" size=\"+4\">Face Depixelizer</font></b>\n",
        "\n",
        "Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly.\n",
        "\n",
        "<b><font color=\"black\" size=\"+2\">Based on:</font></b>\n",
        "\n",
        "**GitHub repository**: [PULSE](https://github.com/adamian98/pulse)\n",
        "\n",
        "Article: [PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models](https://arxiv.org/abs/2003.03808)\n",
        "\n",
        "Creators: **[Alex Damian](https://github.com/adamian98), [Sachit Menon](mailto:sachit.menon@duke.edu).**\n",
        "\n",
        "<b><font color=\"black\" size=\"+2\">Colab created by:</font></b>\n",
        "\n",
        "GitHub: [@tg-bomze](https://github.com/tg-bomze),\n",
        "Telegram: [@bomze](https://t.me/bomze),\n",
        "Twitter: [@tg_bomze](https://twitter.com/tg_bomze).\n",
        "\n",
        "---\n",
        "##### <font color='red'>Model weights are currently stored on Google Drive, which enforces a daily download quota. You may therefore see an error such as \"Google Drive Quota Exceeded\" or \"No such file or directory: '/content/pulse/runs/face.png'\". If you hit this error, try again later in the day or come back tomorrow.</font>\n",
        "\n",
        "```\n",
        "To get started, click the button indicated by the red arrow, then wait until execution completes.\n",
        "```\n",
        "\n"
      ]
    },
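    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The search PULSE performs can be summarized as: find a latent vector whose generated high-resolution image, once downscaled, matches the low-resolution input. As a rough illustration only (this is not the actual PULSE code; `downscale` is a simple average-pooling stand-in for the repository's bicubic downsampler, and all names here are made up for this sketch):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def downscale(img, factor):\n",
        "    # Average-pool factor x factor blocks (stand-in for bicubic downsampling)\n",
        "    h, w = img.shape\n",
        "    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))\n",
        "\n",
        "def downscaling_loss(candidate_hr, observed_lr, factor):\n",
        "    # The consistency term minimized over latent space: how far the\n",
        "    # downscaled candidate is from the pixelated input image\n",
        "    return float(np.mean((downscale(candidate_hr, factor) - observed_lr) ** 2))\n",
        "```\n",
        "\n",
        "A candidate whose loss is near zero \"downscales correctly\"; perceptual realism comes from restricting the search to the generator's outputs.\n"
      ]
    },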
    {
      "cell_type": "code",
      "metadata": {
        "id": "fU0aGtD4Nl4W",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title <b><font color=\"red\" size=\"+3\">←</font><font color=\"black\" size=\"+3\"> Let's ROCK!</font></b>\n",
        "#@markdown **After starting this block, scroll down the page and upload a square pixelated photo that contains the whole head. The neural network works best on images where the person faces the camera directly. Example:**\n",
        "\n",
        "#@markdown ![example](https://github.com/tg-bomze/Face-Depixelizer/raw/master/example.jpg)\n",
        "\n",
        "#@markdown *You can crop the photo [HERE](https://www.iloveimg.com/crop-image)*\n",
        "\n",
        "#@markdown ---\n",
        "import torch\n",
        "import torchvision\n",
        "from pathlib import Path\n",
        "if not Path(\"PULSE.py\").exists():\n",
        "  if Path(\"pulse\").exists():\n",
        "    %cd /content/pulse\n",
        "  else:\n",
        "    !git clone https://github.com/adamian98/pulse\n",
        "    %cd /content/pulse\n",
        "    !mkdir -p input/\n",
        "\n",
        "# Define these unconditionally so re-running the cell in an existing session works\n",
        "toPIL = torchvision.transforms.ToPILImage()\n",
        "toTensor = torchvision.transforms.ToTensor()\n",
        "from bicubic import BicubicDownSample\n",
        "D = BicubicDownSample(factor=1)\n",
        "\n",
        "import os\n",
        "from io import BytesIO\n",
        "import matplotlib.pyplot as plt\n",
        "import matplotlib.image as mpimg\n",
        "from PIL import Image\n",
        "from PULSE import PULSE\n",
        "from google.colab import files\n",
        "from IPython.display import display, clear_output\n",
        "import numpy as np\n",
        "from drive import open_url\n",
        "from mpl_toolkits.axes_grid1 import ImageGrid\n",
        "%matplotlib inline\n",
        "\n",
        "#@markdown ## Basic settings:\n",
        "#@markdown ##### *If you have already uploaded a photo and just want to experiment with the settings, then uncheck the following checkbox*:\n",
        "upload_new_photo = True #@param {type:\"boolean\"}\n",
        "\n",
        "\n",
        "if upload_new_photo:\n",
        "  !rm -rf /content/pulse/input/face.png\n",
        "  clear_output()\n",
        "  uploaded = files.upload()\n",
        "  if len(uploaded) != 1:\n",
        "    raise Exception(\"You need to upload exactly one image.\")\n",
        "  for fn in uploaded.keys():\n",
        "    print('User uploaded file \"{name}\" with length {length} bytes'.format(\n",
        "        name=fn, length=len(uploaded[fn])))\n",
        "  # Strip spaces from the filename so the shell commands below do not break\n",
        "  os.rename(fn, fn.replace(\" \", \"\"))\n",
        "  fn = fn.replace(\" \", \"\")\n",
        "\n",
        "  face = Image.open(fn)\n",
        "  face = face.resize((1024, 1024), Image.LANCZOS)  # ANTIALIAS is a deprecated alias of LANCZOS\n",
        "  face = face.convert('RGB')\n",
        "  face_name = 'face.png'\n",
        "  face.save(face_name)\n",
        "  %cp $face_name /content/pulse/input/\n",
        "\n",
        "  images = []\n",
        "  imagesHR = []\n",
        "  imagesHR.append(face)\n",
        "  face = toPIL(D(toTensor(face).unsqueeze(0).cuda()).cpu().detach().clamp(0,1)[0])\n",
        "  images.append(face)\n",
        "\n",
        "#@markdown ---\n",
        "#@markdown ## Advanced settings:\n",
        "#@markdown ##### *If you want a more accurate result, adjust the following* **DEFAULT** *variables*:\n",
        "\n",
        "seed = 100 #@param {type:\"integer\"}\n",
        "noise_type = 'trainable'  #@param ['zero', 'fixed', 'trainable']\n",
        "optimizer = 'adam'  #@param ['sgd', 'adam','sgdm', 'adamax']\n",
        "learning_rate = 0.4 #@param {type:\"slider\", min:0, max:1, step:0.05}\n",
        "learning_rate_schedule = 'linear1cycledrop'  #@param ['fixed', 'linear1cycle', 'linear1cycledrop']\n",
        "steps = 100 #@param {type:\"slider\", min:100, max:1000, step:50}\n",
        "clear_output()\n",
        "\n",
        "seed = abs(seed)\n",
        "print('Estimated Runtime: {}s.\\n'.format(round(0.23*steps)+6))\n",
        "!python run.py \\\n",
        "  -seed $seed \\\n",
        "  -noise_type $noise_type \\\n",
        "  -opt_name $optimizer \\\n",
        "  -learning_rate $learning_rate \\\n",
        "  -steps $steps \\\n",
        "  -lr_schedule $learning_rate_schedule\n",
        "\n",
        "#@markdown ---\n",
        "#@markdown *If there is an error during execution or the \"**Browse**\" button is not active, try running this block again*\n",
        "\n",
        "fig, (ax1, ax2) = plt.subplots(1, 2)\n",
        "ax1.imshow(mpimg.imread('/content/pulse/input/face.png'))\n",
        "ax1.set_title('Original')\n",
        "ax2.imshow(mpimg.imread('/content/pulse/runs/face.png'))\n",
        "ax2.set_title('Result')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DUfP6_7vTK3b",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title <b><font color=\"red\" size=\"+3\">←</font><font color=\"black\" size=\"+3\"> Download result</font></b>\n",
        "try: files.download('/content/pulse/runs/face.png')\n",
        "except Exception: raise Exception(\"No result image. Run the main block above first.\")"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}