{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Usage\n",
    "\n",
    "**To train a model**: run 1–10.\n",
    "\n",
    "**To load model weights**: run 1 and 4–7.\n",
    "\n",
    "**To swap a single face image with a trained model**: run \"**To load model weights**\", then 11.\n",
    "\n",
    "**To create video clips with a trained model**: run \"**To load model weights**\", then 12 and 13 (or 14).\n",
    "\n",
    "\n",
    "## Index\n",
    "1. [Import Packages](#1)\n",
    "2. [Install Requirements (optional)](#2)\n",
    "3. [Import VGGFace (optional)](#3)\n",
    "4. [Config](#4)\n",
    "5. [Define Models](#5)\n",
    "6. [Load Models](#6)\n",
    "7. [Define Input/Output Variables](#7)\n",
    "8. [Define Loss Function](#8)\n",
    "9. [Utils for loading/displaying images](#9)\n",
    "10. [Start Training](#10)\n",
    "11. [Helper Function: face_swap()](#11)\n",
    "12. [Import Packages for Making Video Clips](#12)\n",
    "13. [Make Video Clips w/o Face Alignment](#13)\n",
    "14. [Make Video Clips w/ Face Alignment](#14)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='1'></a>\n",
    "# 1. Import packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras.models import Sequential, Model\n",
    "from keras.layers import *\n",
    "from keras.layers.advanced_activations import LeakyReLU\n",
    "from keras.activations import relu\n",
    "from keras.initializers import RandomNormal\n",
    "from keras.applications import *\n",
    "import keras.backend as K\n",
    "from keras.layers.core import Layer\n",
    "from keras.engine import InputSpec\n",
    "from keras import initializers\n",
    "from tensorflow.contrib.distributions import Beta\n",
    "import tensorflow as tf\n",
    "from keras.optimizers import Adam"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from image_augmentation import random_transform\n",
    "from image_augmentation import random_warp\n",
    "from umeyama import umeyama\n",
    "from utils import get_image_paths, load_images, stack_images\n",
    "from pixel_shuffler import PixelShuffler"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "import numpy as np\n",
    "from PIL import Image\n",
    "import cv2\n",
    "import glob\n",
    "from random import randint, shuffle\n",
    "from IPython.display import clear_output\n",
    "from IPython.display import display\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='2'></a>\n",
    "# 2. Install requirements\n",
    "\n",
    "## ========== CAUTION ========== \n",
    "\n",
    "If you are running this notebook on a local machine, please read [this blog](http://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/) before running the following cells, which pip install packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# https://github.com/rcmalli/keras-vggface\n",
    "# Skip this cell if you don't want to use perceptual loss\n",
    "#!pip install keras_vggface"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# https://github.com/ageitgey/face_recognition\n",
    "#!pip install face_recognition"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We only import `face_recognition` and `moviepy` when making videos; they are not required for training the GAN models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#!pip install moviepy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='3'></a>\n",
    "# 3. Import VGGFace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras_vggface.vggface import VGGFace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "vggface = VGGFace(include_top=False, model='resnet50', input_shape=(224, 224, 3))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#vggface.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='4'></a>\n",
    "# 4. Config\n",
    "\n",
    "mixup paper: https://arxiv.org/abs/1710.09412\n",
    "\n",
    "Default training data directories: `./faceA/` and `./faceB/`"
   ]
  },
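  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The mixup trick blends real and fake discriminator inputs with a Beta-distributed coefficient. A minimal NumPy sketch of the idea (toy shapes and values, not the actual training graph):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Sample the mixing coefficient lam ~ Beta(alpha, alpha); alpha = 0.2 as in the config below\n",
    "lam = np.random.beta(0.2, 0.2)\n",
    "\n",
    "# Two toy batches standing in for real and generated discriminator inputs\n",
    "real = np.ones((2, 4, 4, 3))\n",
    "fake = np.zeros((2, 4, 4, 3))\n",
    "\n",
    "# Convex combination: every pixel lies between the two inputs\n",
    "mixed = lam * real + (1 - lam) * fake\n",
    "assert mixed.min() >= 0.0 and mixed.max() <= 1.0"
   ]
  },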
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "K.set_learning_phase(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "channel_axis=-1\n",
    "channel_first = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "IMAGE_SHAPE = (64, 64, 3)\n",
    "nc_in = 3 # number of input channels of generators\n",
    "nc_D_inp = 6 # number of input channels of discriminators\n",
    "\n",
    "use_perceptual_loss = True # This should NOT be changed.\n",
    "use_lsgan = True\n",
    "use_self_attn = False\n",
    "use_instancenorm = False\n",
    "use_mixup = True\n",
    "mixup_alpha = 0.2 # 0.2\n",
    "w_l2 = 1e-4 # weight decay\n",
    "\n",
    "# Adding motion blurs as data augmentation\n",
    "# set True if training data contains images extracted from videos\n",
    "use_da_motion_blur = False \n",
    "\n",
    "batchSize = 8\n",
    "lrD = 1e-4 # Discriminator learning rate\n",
    "lrG = 1e-4 # Generator learning rate\n",
    "\n",
    "# Path of training images\n",
    "img_dirA = './faceA/*.*'\n",
    "img_dirB = './faceB/*.*'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='5'></a>\n",
    "# 5. Define models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "conv_init = RandomNormal(0, 0.02)\n",
    "gamma_init = RandomNormal(1., 0.02) # for batch normalization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Scale(Layer):\n",
    "    '''\n",
    "    Code borrows from https://github.com/flyyufelix/cnn_finetune\n",
    "    '''\n",
    "    def __init__(self, weights=None, axis=-1, gamma_init='zero', **kwargs):\n",
    "        self.axis = axis\n",
    "        self.gamma_init = initializers.get(gamma_init)\n",
    "        self.initial_weights = weights\n",
    "        super(Scale, self).__init__(**kwargs)\n",
    "\n",
    "    def build(self, input_shape):\n",
    "        self.input_spec = [InputSpec(shape=input_shape)]\n",
    "\n",
    "        # Compatibility with TensorFlow >= 1.0.0\n",
    "        self.gamma = K.variable(self.gamma_init((1,)), name='{}_gamma'.format(self.name))\n",
    "        self.trainable_weights = [self.gamma]\n",
    "\n",
    "        if self.initial_weights is not None:\n",
    "            self.set_weights(self.initial_weights)\n",
    "            del self.initial_weights\n",
    "\n",
    "    def call(self, x, mask=None):\n",
    "        return self.gamma * x\n",
    "\n",
    "    def get_config(self):\n",
    "        config = {\"axis\": self.axis}\n",
    "        base_config = super(Scale, self).get_config()\n",
    "        return dict(list(base_config.items()) + list(config.items()))\n",
    "\n",
    "\n",
    "def self_attn_block(inp, nc):\n",
    "    '''\n",
    "    Code borrows from https://github.com/taki0112/Self-Attention-GAN-Tensorflow\n",
    "    '''\n",
    "    assert nc//8 > 0, f\"Input channels must be >= 8, but got nc={nc}\"\n",
    "    x = inp\n",
    "    shape_x = x.get_shape().as_list()\n",
    "    \n",
    "    f = Conv2D(nc//8, 1, kernel_initializer=conv_init)(x)\n",
    "    g = Conv2D(nc//8, 1, kernel_initializer=conv_init)(x)\n",
    "    h = Conv2D(nc, 1, kernel_initializer=conv_init)(x)\n",
    "    \n",
    "    shape_f = f.get_shape().as_list()\n",
    "    shape_g = g.get_shape().as_list()\n",
    "    shape_h = h.get_shape().as_list()\n",
    "    flat_f = Reshape((-1, shape_f[-1]))(f)\n",
    "    flat_g = Reshape((-1, shape_g[-1]))(g)\n",
    "    flat_h = Reshape((-1, shape_h[-1]))(h)   \n",
    "    \n",
    "    s = Lambda(lambda x: tf.matmul(x[0], x[1], transpose_b=True))([flat_g, flat_f])\n",
    "\n",
    "    beta = Softmax(axis=-1)(s)\n",
    "    o = Lambda(lambda x: tf.matmul(x[0], x[1]))([beta, flat_h])\n",
    "    o = Reshape(shape_x[1:])(o)\n",
    "    o = Scale()(o)\n",
    "    \n",
    "    out = add([o, inp])\n",
    "    return out"
   ]
  },
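  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The attention computed in `self_attn_block` can be sanity-checked in plain NumPy: flatten the spatial dimensions, form `softmax(g @ f.T)`, and apply it to `h`. A shape-level sketch with toy sizes (not the Keras graph):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "H, W, C = 4, 4, 16  # toy feature map; C must be >= 8 so that C//8 > 0\n",
    "f = np.random.rand(H * W, C // 8)  # key features, flattened over space\n",
    "g = np.random.rand(H * W, C // 8)  # query features\n",
    "h = np.random.rand(H * W, C)       # value features\n",
    "\n",
    "s = g @ f.T  # (HW, HW) attention logits\n",
    "beta = np.exp(s) / np.exp(s).sum(axis=-1, keepdims=True)  # softmax over keys\n",
    "o = (beta @ h).reshape(H, W, C)  # attended output, same spatial shape as the input\n",
    "\n",
    "assert o.shape == (H, W, C)\n",
    "assert np.allclose(beta.sum(axis=-1), 1.0)  # each attention row is a distribution"
   ]
  },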
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "#def batchnorm():\n",
    "#    return BatchNormalization(momentum=0.9, axis=channel_axis, epsilon=1.01e-5, gamma_initializer = gamma_init)\n",
    "\n",
    "def conv_block(input_tensor, f):\n",
    "    x = input_tensor\n",
    "    x = Conv2D(f, kernel_size=3, strides=2, kernel_regularizer=regularizers.l2(w_l2),  \n",
    "               kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)\n",
    "    x = Activation(\"relu\")(x)\n",
    "    return x\n",
    "\n",
    "def conv_block_d(input_tensor, f, use_instance_norm=False):\n",
    "    x = input_tensor\n",
    "    x = Conv2D(f, kernel_size=4, strides=2, kernel_regularizer=regularizers.l2(w_l2), \n",
    "               kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)\n",
    "    x = LeakyReLU(alpha=0.2)(x)\n",
    "    return x\n",
    "\n",
    "def res_block(input_tensor, f):\n",
    "    x = input_tensor\n",
    "    x = Conv2D(f, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2), \n",
    "               kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)\n",
    "    x = LeakyReLU(alpha=0.2)(x)\n",
    "    x = Conv2D(f, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2), \n",
    "               kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)\n",
    "    x = add([x, input_tensor])\n",
    "    x = LeakyReLU(alpha=0.2)(x)\n",
    "    return x\n",
    "\n",
    "# Legacy\n",
    "#def upscale_block(input_tensor, f):\n",
    "#    x = input_tensor\n",
    "#    x = Conv2DTranspose(f, kernel_size=3, strides=2, use_bias=False, kernel_initializer=conv_init)(x) \n",
    "#    x = LeakyReLU(alpha=0.2)(x)\n",
    "#    return x\n",
    "\n",
    "def upscale_ps(filters, use_norm=True):\n",
    "    def block(x):\n",
    "        x = Conv2D(filters*4, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2), \n",
    "                   kernel_initializer=RandomNormal(0, 0.02), padding='same')(x)\n",
    "        x = LeakyReLU(0.2)(x)\n",
    "        x = PixelShuffler()(x)\n",
    "        return x\n",
    "    return block\n",
    "\n",
    "def Discriminator(nc_in, input_size=64):\n",
    "    inp = Input(shape=(input_size, input_size, nc_in))\n",
    "    #x = GaussianNoise(0.05)(inp)\n",
    "    x = conv_block_d(inp, 64, False)\n",
    "    x = conv_block_d(x, 128, False)\n",
    "    x = self_attn_block(x, 128) if use_self_attn else x\n",
    "    x = conv_block_d(x, 256, False)\n",
    "    x = self_attn_block(x, 256) if use_self_attn else x\n",
    "    out = Conv2D(1, kernel_size=4, kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)   \n",
    "    return Model(inputs=[inp], outputs=out)\n",
    "\n",
    "def Encoder(nc_in=3, input_size=64):\n",
    "    inp = Input(shape=(input_size, input_size, nc_in))\n",
    "    x = Conv2D(64, kernel_size=5, kernel_initializer=conv_init, use_bias=False, padding=\"same\")(inp)\n",
    "    x = conv_block(x,128)\n",
    "    x = conv_block(x,256)\n",
    "    x = self_attn_block(x, 256) if use_self_attn else x\n",
    "    x = conv_block(x,512) \n",
    "    x = self_attn_block(x, 512) if use_self_attn else x\n",
    "    x = conv_block(x,1024)\n",
    "    x = Dense(1024)(Flatten()(x))\n",
    "    x = Dense(4*4*1024)(x)\n",
    "    x = Reshape((4, 4, 1024))(x)\n",
    "    out = upscale_ps(512)(x)\n",
    "    return Model(inputs=inp, outputs=out)\n",
    "\n",
    "# Legacy, left for someone to try if interested\n",
    "#def Decoder(nc_in=512, input_size=8):\n",
    "#    inp = Input(shape=(input_size, input_size, nc_in))   \n",
    "#    x = upscale_block(inp, 256)\n",
    "#    x = Cropping2D(((0,1),(0,1)))(x)\n",
    "#    x = upscale_block(x, 128)\n",
    "#    x = res_block(x, 128)\n",
    "#    x = Cropping2D(((0,1),(0,1)))(x)\n",
    "#    x = upscale_block(x, 64)\n",
    "#    x = res_block(x, 64)\n",
    "#    x = res_block(x, 64)\n",
    "#    x = Cropping2D(((0,1),(0,1)))(x)\n",
    "#    x = Conv2D(3, kernel_size=5, kernel_initializer=conv_init, use_bias=False, padding=\"same\")(x)\n",
    "#    out = Activation(\"tanh\")(x)\n",
    "#    return Model(inputs=inp, outputs=out)\n",
    "\n",
    "def Decoder_ps(nc_in=512, input_size=8):\n",
    "    input_ = Input(shape=(input_size, input_size, nc_in))\n",
    "    x = input_\n",
    "    x = upscale_ps(256)(x)\n",
    "    x = upscale_ps(128)(x)\n",
    "    x = self_attn_block(x, 128) if use_self_attn else x\n",
    "    x = upscale_ps(64)(x)\n",
    "    x = res_block(x, 64)\n",
    "    x = self_attn_block(x, 64) if use_self_attn else x\n",
    "    #x = Conv2D(4, kernel_size=5, padding='same')(x)   \n",
    "    alpha = Conv2D(1, kernel_size=5, padding='same', activation=\"sigmoid\")(x)\n",
    "    rgb = Conv2D(3, kernel_size=5, padding='same', activation=\"tanh\")(x)\n",
    "    out = concatenate([alpha, rgb])\n",
    "    return Model(input_, out )    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "encoder = Encoder()\n",
    "decoder_A = Decoder_ps()\n",
    "decoder_B = Decoder_ps()\n",
    "\n",
    "x = Input(shape=IMAGE_SHAPE)\n",
    "\n",
    "netGA = Model(x, decoder_A(encoder(x)))\n",
    "netGB = Model(x, decoder_B(encoder(x)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "netDA = Discriminator(nc_D_inp)\n",
    "netDB = Discriminator(nc_D_inp)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='6'></a>\n",
    "# 6. Load Models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "model loaded.\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    encoder.load_weights(\"models/encoder.h5\")\n",
    "    decoder_A.load_weights(\"models/decoder_A.h5\")\n",
    "    decoder_B.load_weights(\"models/decoder_B.h5\")\n",
    "    netDA.load_weights(\"models/netDA.h5\")\n",
    "    netDB.load_weights(\"models/netDB.h5\")\n",
    "    print(\"Model weights loaded successfully.\")\n",
    "except Exception as e:\n",
    "    print(\"Error occurred while loading weights:\", e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='7'></a>\n",
    "# 7. Define Input/Output Variables\n",
    "\n",
    "    distorted_A: A (batch_size, 64, 64, 3) tensor, input of generator_A (netGA).\n",
    "    distorted_B: A (batch_size, 64, 64, 3) tensor, input of generator_B (netGB).\n",
    "    fake_A: (batch_size, 64, 64, 3) tensor, output of generator_A (netGA).\n",
    "    fake_B: (batch_size, 64, 64, 3) tensor, output of generator_B (netGB).\n",
    "    mask_A: (batch_size, 64, 64, 1) tensor, mask output of generator_A (netGA).\n",
    "    mask_B: (batch_size, 64, 64, 1) tensor, mask output of generator_B (netGB).\n",
    "    path_A: A function that takes distorted_A as input and outputs fake_A.\n",
    "    path_B: A function that takes distorted_B as input and outputs fake_B.\n",
    "    path_mask_A: A function that takes distorted_A as input and outputs mask_A.\n",
    "    path_mask_B: A function that takes distorted_B as input and outputs mask_B.\n",
    "    path_abgr_A: A function that takes distorted_A as input and outputs concat([mask_A, fake_A]).\n",
    "    path_abgr_B: A function that takes distorted_B as input and outputs concat([mask_B, fake_B]).\n",
    "    real_A: A (batch_size, 64, 64, 3) tensor, target images for generator_A given input distorted_A.\n",
    "    real_B: A (batch_size, 64, 64, 3) tensor, target images for generator_B given input distorted_B."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cycle_variables(netG):\n",
    "    distorted_input = netG.inputs[0]\n",
    "    fake_output = netG.outputs[0]\n",
    "    alpha = Lambda(lambda x: x[:,:,:, :1])(fake_output)\n",
    "    rgb = Lambda(lambda x: x[:,:,:, 1:])(fake_output)\n",
    "    \n",
    "    masked_fake_output = alpha * rgb + (1-alpha) * distorted_input \n",
    "\n",
    "    fn_generate = K.function([distorted_input], [masked_fake_output])\n",
    "    fn_mask = K.function([distorted_input], [concatenate([alpha, alpha, alpha])])\n",
    "    fn_abgr = K.function([distorted_input], [concatenate([alpha, rgb])])\n",
    "    return distorted_input, fake_output, alpha, fn_generate, fn_mask, fn_abgr"
   ]
  },
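  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`cycle_variables` composites the generator output onto its input using the predicted alpha mask: `alpha * rgb + (1 - alpha) * distorted_input`. A NumPy illustration with made-up pixel values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy images in [-1, 1]: where alpha = 1 the generated pixels win,\n",
    "# where alpha = 0 the (distorted) input passes through unchanged\n",
    "rgb = np.full((1, 2, 2, 3), 0.8)         # generator RGB output\n",
    "distorted = np.full((1, 2, 2, 3), -0.5)  # network input\n",
    "alpha = np.zeros((1, 2, 2, 1))\n",
    "alpha[:, 0, :, :] = 1.0  # top row: use generated pixels\n",
    "\n",
    "out = alpha * rgb + (1 - alpha) * distorted\n",
    "assert np.allclose(out[:, 0], 0.8)   # generated region\n",
    "assert np.allclose(out[:, 1], -0.5)  # passthrough region"
   ]
  },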
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "distorted_A, fake_A, mask_A, path_A, path_mask_A, path_abgr_A = cycle_variables(netGA)\n",
    "distorted_B, fake_B, mask_B, path_B, path_mask_B, path_abgr_B = cycle_variables(netGB)\n",
    "real_A = Input(shape=IMAGE_SHAPE)\n",
    "real_B = Input(shape=IMAGE_SHAPE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "<a id='8'></a>\n",
    "# 8. Define Loss Function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Loss function hyperparameter configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hyper params for generators\n",
    "w_D = 0.1 # Discriminator\n",
    "w_recon = 1. # L1 reconstruction loss\n",
    "w_edge = 1. # edge loss\n",
    "w_pl1 = (0.01, 0.1, 0.2, 0.02) # perceptual loss 1 \n",
    "w_pl2 = (0.003, 0.03, 0.1, 0.01) # perceptual loss 2 \n",
    "\n",
    "# Alpha mask regularizations\n",
    "#m_mask = 0.5 # Margin value of alpha mask hinge loss\n",
    "w_mask = 0.1 # hinge loss\n",
    "w_mask_fo = 0.01 # Alpha mask total variation loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def first_order(x, axis=1):\n",
    "    img_nrows = x.shape[1]\n",
    "    img_ncols = x.shape[2]\n",
    "    if axis == 1:\n",
    "        return K.abs(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])\n",
    "    elif axis == 2:\n",
    "        return K.abs(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])\n",
    "    else:\n",
    "        return None   "
   ]
  },
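  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`first_order` is an absolute forward difference over the cropped interior of the image. A NumPy mirror, checked on a tiny ramp (illustrative values only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def first_order_np(x, axis=1):\n",
    "    # NumPy mirror of first_order above: absolute forward difference\n",
    "    # over the cropped (nrows-1, ncols-1) interior\n",
    "    n, m = x.shape[1], x.shape[2]\n",
    "    if axis == 1:\n",
    "        return np.abs(x[:, :n-1, :m-1, :] - x[:, 1:, :m-1, :])\n",
    "    elif axis == 2:\n",
    "        return np.abs(x[:, :n-1, :m-1, :] - x[:, :n-1, 1:, :])\n",
    "    return None\n",
    "\n",
    "# A vertical ramp: the difference along axis=1 is 1 everywhere, along axis=2 it is 0\n",
    "ramp = np.arange(4, dtype=float).reshape(1, 4, 1, 1) * np.ones((1, 4, 3, 1))\n",
    "assert np.allclose(first_order_np(ramp, axis=1), 1.0)\n",
    "assert np.allclose(first_order_np(ramp, axis=2), 0.0)"
   ]
  },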
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "if use_lsgan:\n",
    "    # Least-squares GAN loss (MSE against the target labels)\n",
    "    loss_fn = lambda output, target: K.mean(K.square(output - target))\n",
    "else:\n",
    "    # Standard GAN loss (binary cross-entropy, with epsilon for numerical stability)\n",
    "    loss_fn = lambda output, target: -K.mean(K.log(output + 1e-12)*target + K.log(1 - output + 1e-12)*(1 - target))"
   ]
  },
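  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two loss choices penalize the same predictions quite differently; a quick NumPy comparison with illustrative numbers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def lsgan_loss(output, target):\n",
    "    # Least-squares GAN loss: mean squared error against the target label\n",
    "    return np.mean(np.square(output - target))\n",
    "\n",
    "def bce_loss(output, target):\n",
    "    # Standard GAN (binary cross-entropy) loss with the same 1e-12 epsilon\n",
    "    return -np.mean(np.log(output + 1e-12) * target + np.log(1 - output + 1e-12) * (1 - target))\n",
    "\n",
    "output = np.array([0.9, 0.1])  # discriminator scores for two samples\n",
    "target = np.ones_like(output)  # both labeled real\n",
    "# The confident mistake (0.1 vs 1) is penalized far more heavily by cross-entropy\n",
    "assert lsgan_loss(output, target) < bce_loss(output, target)"
   ]
  },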
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "def define_loss(netD, real, fake_argb, distorted, vggface_feat=None):   \n",
    "    alpha = Lambda(lambda x: x[:,:,:, :1])(fake_argb)\n",
    "    fake_rgb = Lambda(lambda x: x[:,:,:, 1:])(fake_argb)\n",
    "    fake = alpha * fake_rgb + (1-alpha) * distorted\n",
    "    \n",
    "    if use_mixup:\n",
    "        dist = Beta(mixup_alpha, mixup_alpha)\n",
    "        lam = dist.sample()\n",
    "        mixup = lam * concatenate([real, distorted]) + (1 - lam) * concatenate([fake, distorted])        \n",
    "        output_mixup = netD(mixup)\n",
    "        loss_D = loss_fn(output_mixup, lam * K.ones_like(output_mixup)) \n",
    "        output_fake = netD(concatenate([fake, distorted])) # dummy\n",
    "        loss_G = w_D * loss_fn(output_mixup, (1 - lam) * K.ones_like(output_mixup))\n",
    "    else:\n",
    "        output_real = netD(concatenate([real, distorted])) # positive sample\n",
    "        output_fake = netD(concatenate([fake, distorted])) # negative sample   \n",
    "        loss_D_real = loss_fn(output_real, K.ones_like(output_real))    \n",
    "        loss_D_fake = loss_fn(output_fake, K.zeros_like(output_fake))   \n",
    "        loss_D = loss_D_real + loss_D_fake\n",
    "        loss_G = w_D * loss_fn(output_fake, K.ones_like(output_fake))  \n",
    "    \n",
    "    # Reconstruction loss\n",
    "    loss_G += w_recon * K.mean(K.abs(fake_rgb - real))\n",
    "    \n",
    "    # Edge loss (similar to total variation loss)\n",
    "    loss_G += w_edge * K.mean(K.abs(first_order(fake_rgb, axis=1) - first_order(real, axis=1)))\n",
    "    loss_G += w_edge * K.mean(K.abs(first_order(fake_rgb, axis=2) - first_order(real, axis=2)))\n",
    "    \n",
    "    \n",
    "    # Perceptual Loss\n",
    "    if vggface_feat is not None:\n",
    "        def preprocess_vggface(x):\n",
    "            x = (x + 1)/2 * 255 # channel order: BGR\n",
    "            x -= [91.4953, 103.8827, 131.0912]\n",
    "            return x\n",
    "        pl_params = w_pl1\n",
    "        real_sz224 = tf.image.resize_images(real, [224, 224])\n",
    "        real_sz224 = Lambda(preprocess_vggface)(real_sz224)\n",
    "        \n",
    "        # Perceptual loss for raw output\n",
    "        fake_sz224 = tf.image.resize_images(fake_rgb, [224, 224]) \n",
    "        fake_sz224 = Lambda(preprocess_vggface)(fake_sz224)        \n",
    "        real_feat112, real_feat55, real_feat28, real_feat7 = vggface_feat(real_sz224)\n",
    "        fake_feat112, fake_feat55, fake_feat28, fake_feat7  = vggface_feat(fake_sz224)    \n",
    "        loss_G += pl_params[0] * K.mean(K.abs(fake_feat7 - real_feat7))\n",
    "        loss_G += pl_params[1] * K.mean(K.abs(fake_feat28 - real_feat28))\n",
    "        loss_G += pl_params[2] * K.mean(K.abs(fake_feat55 - real_feat55))\n",
    "        loss_G += pl_params[3] * K.mean(K.abs(fake_feat112 - real_feat112))\n",
    "        \n",
    "        # Perceptual loss for masked output\n",
    "        pl_params = w_pl2\n",
    "        fake_sz224 = tf.image.resize_images(fake, [224, 224]) \n",
    "        fake_sz224 = Lambda(preprocess_vggface)(fake_sz224)\n",
    "        fake_feat112, fake_feat55, fake_feat28, fake_feat7  = vggface_feat(fake_sz224)    \n",
    "        loss_G += pl_params[0] * K.mean(K.abs(fake_feat7 - real_feat7))\n",
    "        loss_G += pl_params[1] * K.mean(K.abs(fake_feat28 - real_feat28))\n",
    "        loss_G += pl_params[2] * K.mean(K.abs(fake_feat55 - real_feat55))\n",
    "        loss_G += pl_params[3] * K.mean(K.abs(fake_feat112 - real_feat112))\n",
    "    \n",
    "    return loss_D, loss_G"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ========== Define Perceptual Loss Model==========\n",
    "if use_perceptual_loss:\n",
    "    vggface.trainable = False\n",
    "    out_size112 = vggface.layers[1].output\n",
    "    out_size55 = vggface.layers[36].output\n",
    "    out_size28 = vggface.layers[78].output\n",
    "    out_size7 = vggface.layers[-2].output\n",
    "    vggface_feat = Model(vggface.input, [out_size112, out_size55, out_size28, out_size7])\n",
    "    vggface_feat.trainable = False\n",
    "else:\n",
    "    vggface_feat = None"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "netDA_train = netGA_train = netDB_train = netGB_train = None"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "def build_training_functions(use_PL=False, use_mask_hinge_loss=False, m_mask=0.5, lr_factor=1):\n",
    "    global netGA, netDA, real_A, fake_A, distorted_A, mask_A\n",
    "    global netGB, netDB, real_B, fake_B, distorted_B, mask_B\n",
    "    global netDA_train, netGA_train, netDB_train, netGB_train\n",
    "    global vggface_feat\n",
    "    global w_mask, w_mask_fo\n",
    "    \n",
    "    if use_PL:\n",
    "        loss_DA, loss_GA = define_loss(netDA, real_A, fake_A, distorted_A, vggface_feat)\n",
    "        loss_DB, loss_GB = define_loss(netDB, real_B, fake_B, distorted_B, vggface_feat)\n",
    "    else:\n",
    "        loss_DA, loss_GA = define_loss(netDA, real_A, fake_A, distorted_A, vggface_feat=None)\n",
    "        loss_DB, loss_GB = define_loss(netDB, real_B, fake_B, distorted_B, vggface_feat=None)\n",
    "\n",
    "    # Alpha mask loss\n",
    "    if not use_mask_hinge_loss:\n",
    "        loss_GA += 1e-3 * K.mean(K.abs(mask_A))\n",
    "        loss_GB += 1e-3 * K.mean(K.abs(mask_B))\n",
    "    else:\n",
    "        loss_GA += w_mask * K.mean(K.maximum(0., m_mask - mask_A))\n",
    "        loss_GB += w_mask * K.mean(K.maximum(0., m_mask - mask_B))\n",
    "        \n",
    "    # Alpha mask total variation loss\n",
    "    loss_GA += w_mask_fo * K.mean(first_order(mask_A, axis=1))\n",
    "    loss_GA += w_mask_fo * K.mean(first_order(mask_A, axis=2))\n",
    "    loss_GB += w_mask_fo * K.mean(first_order(mask_B, axis=1))\n",
    "    loss_GB += w_mask_fo * K.mean(first_order(mask_B, axis=2))\n",
    "    \n",
    "    # L2 weight decay\n",
    "    # https://github.com/keras-team/keras/issues/2662\n",
    "    for loss_tensor in netGA.losses:\n",
    "        loss_GA += loss_tensor\n",
    "    for loss_tensor in netGB.losses:\n",
    "        loss_GB += loss_tensor\n",
    "    for loss_tensor in netDA.losses:\n",
    "        loss_DA += loss_tensor\n",
    "    for loss_tensor in netDB.losses:\n",
    "        loss_DB += loss_tensor\n",
    "    \n",
    "    weightsDA = netDA.trainable_weights\n",
    "    weightsGA = netGA.trainable_weights\n",
    "    weightsDB = netDB.trainable_weights\n",
    "    weightsGB = netGB.trainable_weights\n",
    "\n",
    "    # Adam(..).get_updates(...)\n",
    "    training_updates = Adam(lr=lrD, beta_1=0.5).get_updates(weightsDA,[],loss_DA)\n",
    "    netDA_train = K.function([distorted_A, real_A],[loss_DA], training_updates)\n",
    "    training_updates = Adam(lr=lrG*lr_factor, beta_1=0.5).get_updates(weightsGA,[], loss_GA)\n",
    "    netGA_train = K.function([distorted_A, real_A], [loss_GA], training_updates)\n",
    "\n",
    "    training_updates = Adam(lr=lrD, beta_1=0.5).get_updates(weightsDB,[],loss_DB)\n",
    "    netDB_train = K.function([distorted_B, real_B],[loss_DB], training_updates)\n",
    "    training_updates = Adam(lr=lrG*lr_factor, beta_1=0.5).get_updates(weightsGB,[], loss_GB)\n",
    "    netGB_train = K.function([distorted_B, real_B], [loss_GB], training_updates)\n",
    "    \n",
    "    print (\"Loss configuration:\")\n",
    "    print (\"use_PL = \" + str(use_PL))\n",
    "    print (\"use_mask_hinge_loss = \" + str(use_mask_hinge_loss))\n",
    "    print (\"m_mask = \" + str(m_mask))    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "<a id='9'></a>\n",
    "# 9. Utils For Loading/Displaying Images"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy import ndimage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_motion_blur_kernel(sz=7):\n",
    "    rot_angle = np.random.uniform(-180,180)\n",
    "    kernel = np.zeros((sz,sz))\n",
    "    kernel[int((sz-1)//2), :] = np.ones(sz)\n",
    "    kernel = ndimage.interpolation.rotate(kernel, rot_angle, reshape=False)\n",
    "    kernel = np.clip(kernel, 0, 1)\n",
    "    normalize_factor = 1 / np.sum(kernel)\n",
    "    kernel = kernel * normalize_factor\n",
    "    return kernel\n",
    "\n",
    "def motion_blur(images):\n",
    "    # images is a list [image1, image2, ...]; kernel size is chosen randomly below\n",
    "    blur_sz = np.random.choice([5, 7, 9, 11])\n",
    "    kernel_motion_blur = get_motion_blur_kernel(blur_sz)\n",
    "    for i, image in enumerate(images):\n",
    "        images[i] = cv2.filter2D(image, -1, kernel_motion_blur).astype(np.float64)\n",
    "    return images"
   ]
  },
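  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`get_motion_blur_kernel` builds a line kernel whose weights sum to 1, so filtering preserves average image brightness. The un-rotated base case can be checked without SciPy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def horizontal_blur_kernel(sz=7):\n",
    "    # The un-rotated base kernel from get_motion_blur_kernel: a single row\n",
    "    # of ones, normalized so that filtering preserves average brightness\n",
    "    kernel = np.zeros((sz, sz))\n",
    "    kernel[(sz - 1) // 2, :] = 1.0\n",
    "    return kernel / kernel.sum()\n",
    "\n",
    "k = horizontal_blur_kernel(7)\n",
    "assert np.isclose(k.sum(), 1.0)  # energy-preserving filter\n",
    "assert np.count_nonzero(k) == 7  # blur along a single line of 7 pixels"
   ]
  },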
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_data(file_pattern):\n",
    "    return glob.glob(file_pattern)\n",
    "  \n",
    "def random_warp_rev(image):\n",
    "    assert image.shape == (256,256,3)\n",
    "    rand_coverage = np.random.randint(25) + 80 # random warping coverage\n",
    "    rand_scale = np.random.uniform(5., 6.2) # random warping scale\n",
    "    range_ = np.linspace(128-rand_coverage, 128+rand_coverage, 5)\n",
    "    mapx = np.broadcast_to(range_, (5,5))\n",
    "    mapy = mapx.T\n",
    "    mapx = mapx + np.random.normal(size=(5,5), scale=rand_scale)\n",
    "    mapy = mapy + np.random.normal(size=(5,5), scale=rand_scale)\n",
    "    interp_mapx = cv2.resize(mapx, (80,80))[8:72,8:72].astype('float32')\n",
    "    interp_mapy = cv2.resize(mapy, (80,80))[8:72,8:72].astype('float32')\n",
    "    warped_image = cv2.remap(image, interp_mapx, interp_mapy, cv2.INTER_LINEAR)\n",
    "    src_points = np.stack([mapx.ravel(), mapy.ravel()], axis=-1)\n",
    "    dst_points = np.mgrid[0:65:16,0:65:16].T.reshape(-1,2)\n",
    "    mat = umeyama(src_points, dst_points, True)[0:2]\n",
    "    target_image = cv2.warpAffine(image, mat, (64,64))\n",
    "    return warped_image, target_image\n",
    "\n",
    "random_transform_args = {\n",
    "    'rotation_range': 10,\n",
    "    'zoom_range': 0.1,\n",
    "    'shift_range': 0.05,\n",
    "    'random_flip': 0.5,\n",
    "    }\n",
    "def read_image(fn, random_transform_args=random_transform_args):\n",
    "    image = cv2.imread(fn)\n",
    "    image = cv2.resize(image, (256,256)) / 255 * 2 - 1\n",
    "    image = random_transform(image, **random_transform_args)\n",
    "    warped_img, target_img = random_warp_rev(image)\n",
    "    \n",
    "    # Motion blur data augmentation:\n",
    "    # we want the model to learn to preserve motion blurs of input images\n",
    "    if np.random.uniform() < 0.25 and use_da_motion_blur: \n",
    "        warped_img, target_img = motion_blur([warped_img, target_img])\n",
    "    \n",
    "    return warped_img, target_img"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A generator that yields (epoch, a batch of warped images, a batch of target images)\n",
    "def minibatch(data, batchsize):\n",
    "    length = len(data)\n",
    "    epoch = i = 0\n",
    "    tmpsize = None  \n",
    "    shuffle(data)\n",
    "    while True:\n",
    "        size = tmpsize if tmpsize else batchsize\n",
    "        if i+size > length:\n",
    "            shuffle(data)\n",
    "            i = 0\n",
    "            epoch+=1        \n",
    "        rtn = np.float32([read_image(data[j]) for j in range(i,i+size)])\n",
    "        i+=size\n",
    "        tmpsize = yield epoch, rtn[:,0,:,:,:], rtn[:,1,:,:,:]       \n",
    "\n",
    "def minibatchAB(dataA, batchsize):\n",
    "    batchA = minibatch(dataA, batchsize)\n",
    "    tmpsize = None    \n",
    "    while True:        \n",
    "        ep1, warped_img, target_img = batchA.send(tmpsize)\n",
    "        tmpsize = yield ep1, warped_img, target_img"
   ]
  },
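  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `tmpsize = yield ...` pattern above lets the caller temporarily override the batch size via `send()` (used later to draw 14 samples for visualization). A minimal sketch of the same pattern on plain integers (the `batcher` name is hypothetical):\n",
    "\n",
    "```python\n",
    "def batcher(data, batchsize):\n",
    "    i = 0\n",
    "    tmpsize = None\n",
    "    while True:\n",
    "        size = tmpsize if tmpsize else batchsize\n",
    "        if i + size > len(data):\n",
    "            i = 0\n",
    "        batch = data[i:i + size]\n",
    "        i += size\n",
    "        # send(n) resumes the generator here with tmpsize = n\n",
    "        tmpsize = yield batch\n",
    "\n",
    "gen = batcher(list(range(100)), batchsize=8)\n",
    "print(len(next(gen)))     # 8, the default batch size\n",
    "print(len(gen.send(14)))  # 14, a one-off override\n",
    "print(len(next(gen)))     # back to 8\n",
    "```"
   ]
  },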
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "def showG(test_A, test_B, path_A, path_B):\n",
    "    figure_A = np.stack([\n",
    "        test_A,\n",
    "        np.squeeze(np.array([path_A([test_A[i:i+1]]) for i in range(test_A.shape[0])])),\n",
    "        np.squeeze(np.array([path_B([test_A[i:i+1]]) for i in range(test_A.shape[0])])),\n",
    "        ], axis=1 )\n",
    "    figure_B = np.stack([\n",
    "        test_B,\n",
    "        np.squeeze(np.array([path_B([test_B[i:i+1]]) for i in range(test_B.shape[0])])),\n",
    "        np.squeeze(np.array([path_A([test_B[i:i+1]]) for i in range(test_B.shape[0])])),\n",
    "        ], axis=1 )\n",
    "\n",
    "    figure = np.concatenate([figure_A, figure_B], axis=0 )\n",
    "    figure = figure.reshape((4,7) + figure.shape[1:])\n",
    "    figure = stack_images(figure)\n",
    "    figure = np.clip((figure + 1) * 255 / 2, 0, 255).astype('uint8')\n",
    "    figure = cv2.cvtColor(figure, cv2.COLOR_BGR2RGB)\n",
    "\n",
    "    display(Image.fromarray(figure))\n",
    "    \n",
    "def showG_mask(test_A, test_B, path_A, path_B):\n",
    "    figure_A = np.stack([\n",
    "        test_A,\n",
    "        (np.squeeze(np.array([path_A([test_A[i:i+1]]) for i in range(test_A.shape[0])])))*2-1,\n",
    "        (np.squeeze(np.array([path_B([test_A[i:i+1]]) for i in range(test_A.shape[0])])))*2-1,\n",
    "        ], axis=1 )\n",
    "    figure_B = np.stack([\n",
    "        test_B,\n",
    "        (np.squeeze(np.array([path_B([test_B[i:i+1]]) for i in range(test_B.shape[0])])))*2-1,\n",
    "        (np.squeeze(np.array([path_A([test_B[i:i+1]]) for i in range(test_B.shape[0])])))*2-1,\n",
    "        ], axis=1 )\n",
    "\n",
    "    figure = np.concatenate([figure_A, figure_B], axis=0 )\n",
    "    figure = figure.reshape((4,7) + figure.shape[1:])\n",
    "    figure = stack_images(figure)\n",
    "    figure = np.clip((figure + 1) * 255 / 2, 0, 255).astype('uint8')\n",
    "    figure = cv2.cvtColor(figure, cv2.COLOR_BGR2RGB)\n",
    "\n",
    "    display(Image.fromarray(figure))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='10'></a>\n",
    "# 10. Start Training\n",
    "\n",
    "Show results and save model weights every `display_iters` iterations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p models # create ./models directory if it doesn't exist"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get filenames\n",
    "train_A = load_data(img_dirA)\n",
    "train_B = load_data(img_dirB)\n",
    "\n",
    "assert len(train_A), \"No image found in \" + str(img_dirA)\n",
    "assert len(train_B), \"No image found in \" + str(img_dirB)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "def show_loss_config(loss_config):\n",
    "    for config, value in loss_config.items():\n",
    "        print(str(config) + \" = \" + str(value))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Init. loss config.\n",
    "loss_config = {}\n",
    "loss_config['use_PL'] = False\n",
    "loss_config['use_mask_hinge_loss'] = False\n",
    "loss_config['m_mask'] = 0.5\n",
    "loss_config['lr_factor'] = 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t0 = time.time()\n",
    "gen_iterations = 0\n",
    "epoch = 0\n",
    "errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0\n",
    "\n",
    "display_iters = 300\n",
    "train_batchA = minibatchAB(train_A, batchSize)\n",
    "train_batchB = minibatchAB(train_B, batchSize)\n",
    "\n",
    "# ========== Change TOTAL_ITERS to desired iterations  ========== \n",
    "TOTAL_ITERS = 40000\n",
    "#iter_dec_swap = TOTAL_ITERS - (np.minimum(len(train_A)*15, len(train_B))*15) // batchSize\n",
    "#if iter_dec_swap <= (9*TOTAL_ITERS//10 - display_iters//2):\n",
    "#    iter_dec_swap = 9*TOTAL_ITERS//10 - display_iters//2\n",
    "\n",
    "while gen_iterations <= TOTAL_ITERS: \n",
    "    epoch, warped_A, target_A = next(train_batchA) \n",
    "    epoch, warped_B, target_B = next(train_batchB) \n",
    "    \n",
    "    # Loss function automation\n",
    "    if gen_iterations == 0:\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (TOTAL_ITERS//5 - display_iters//2):\n",
    "        clear_output()\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = False\n",
    "        loss_config['m_mask'] = 0.0\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (TOTAL_ITERS//5 + TOTAL_ITERS//10 - display_iters//2):\n",
    "        clear_output()\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = True\n",
    "        loss_config['m_mask'] = 0.5\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (2*TOTAL_ITERS//5 - display_iters//2):\n",
    "        clear_output()\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = True\n",
    "        loss_config['m_mask'] = 0.25\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (TOTAL_ITERS//2 - display_iters//2):\n",
    "        clear_output()\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = True\n",
    "        loss_config['m_mask'] = 0.4\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (2*TOTAL_ITERS//3 - display_iters//2):\n",
    "        clear_output()\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = False\n",
    "        loss_config['m_mask'] = 0.1\n",
    "        loss_config['lr_factor'] = 0.3\n",
    "        build_training_functions(**loss_config)\n",
    "    elif gen_iterations == (9*TOTAL_ITERS//10 - display_iters//2):\n",
    "        clear_output()\n",
    "        decoder_A.load_weights(\"models/decoder_B.h5\")\n",
    "        decoder_B.load_weights(\"models/decoder_A.h5\")\n",
    "        loss_config['use_PL'] = True\n",
    "        loss_config['use_mask_hinge_loss'] = True\n",
    "        loss_config['m_mask'] = 0.1\n",
    "        loss_config['lr_factor'] = 0.3\n",
    "        build_training_functions(**loss_config)\n",
    "    \n",
    "    # Train discriminators for one batch\n",
    "    errDA = netDA_train([warped_A, target_A])\n",
    "    errDB = netDB_train([warped_B, target_B])\n",
    "    errDA_sum += errDA[0]\n",
    "    errDB_sum += errDB[0]\n",
    "    \n",
    "    if gen_iterations == 5:\n",
    "        print (\"working.\")\n",
    "\n",
    "    # Train generators for one batch\n",
    "    errGA = netGA_train([warped_A, target_A])\n",
    "    errGB = netGB_train([warped_B, target_B])\n",
    "    errGA_sum += errGA[0]\n",
    "    errGB_sum += errGB[0]\n",
    "    gen_iterations+=1\n",
    "    \n",
    "    if gen_iterations % display_iters == 0:\n",
    "        clear_output()\n",
    "        show_loss_config(loss_config)\n",
    "        print('[iter %d] Loss_DA: %f Loss_DB: %f Loss_GA: %f Loss_GB: %f time: %f'\n",
    "        % (gen_iterations, errDA_sum/display_iters, errDB_sum/display_iters,\n",
    "           errGA_sum/display_iters, errGB_sum/display_iters, time.time()-t0))   \n",
    "        \n",
    "        # get new batch of images and generate results for visualization\n",
    "        _, wA, tA = train_batchA.send(14)  \n",
    "        _, wB, tB = train_batchB.send(14)\n",
    "        showG(tA, tB, path_A, path_B)   \n",
    "        showG(wA, wB, path_A, path_B)         \n",
    "        showG_mask(tA, tB, path_mask_A, path_mask_B)           \n",
    "        errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0\n",
    "        \n",
    "        # Save models\n",
    "        encoder.save_weights(\"models/encoder.h5\")\n",
    "        decoder_A.save_weights(\"models/decoder_A.h5\")\n",
    "        decoder_B.save_weights(\"models/decoder_B.h5\")\n",
    "        netDA.save_weights(\"models/netDA.h5\")\n",
    "        netDB.save_weights(\"models/netDB.h5\")"
   ]
  },
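  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loop above switches the loss configuration at fixed fractions of `TOTAL_ITERS`. The schedule can be restated as a standalone function (a sketch of the thresholds above, ignoring the `display_iters//2` offset; `loss_phase` is a hypothetical name):\n",
    "\n",
    "```python\n",
    "def loss_phase(it, total):\n",
    "    # Phase boundaries mirror the elif chain in the training loop\n",
    "    if it < total // 5:\n",
    "        return dict(use_PL=False, use_mask_hinge_loss=False, m_mask=0.5)\n",
    "    elif it < total // 5 + total // 10:\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=False, m_mask=0.0)\n",
    "    elif it < 2 * total // 5:\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=True, m_mask=0.5)\n",
    "    elif it < total // 2:\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=True, m_mask=0.25)\n",
    "    elif it < 2 * total // 3:\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=True, m_mask=0.4)\n",
    "    elif it < 9 * total // 10:\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=False, m_mask=0.1)\n",
    "    else:  # decoders are also swapped at this point in the loop\n",
    "        return dict(use_PL=True, use_mask_hinge_loss=True, m_mask=0.1)\n",
    "\n",
    "print(loss_phase(0, 40000))\n",
    "print(loss_phase(39000, 40000))\n",
    "```"
   ]
  },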
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='11'></a>\n",
    "# 11. Helper Function: swap_face()\n",
    "This function is provided for those who don't have enough VRAM to run dlib's CNN face detector and the GAN model at the same time.\n",
    "\n",
    "    INPUTS:\n",
    "        img: An RGB face image of any size.\n",
    "        path_func: A function, either path_abgr_A or path_abgr_B.\n",
    "    OUTPUTS:\n",
    "        result_img: An RGB face-swapped image after masking.\n",
    "        result_mask: A single-channel uint8 mask image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "def swap_face(img, path_func):\n",
    "    input_size = img.shape\n",
    "    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) # generator expects BGR input    \n",
    "    ae_input = cv2.resize(img, (64,64))/255. * 2 - 1        \n",
    "    \n",
    "    result = np.squeeze(np.array([path_func([[ae_input]])]))\n",
    "    result_a = result[:,:,0] * 255\n",
    "    result_bgr = np.clip( (result[:,:,1:] + 1) * 255 / 2, 0, 255 )\n",
    "    result_a = np.expand_dims(result_a, axis=2)\n",
    "    result = (result_a/255 * result_bgr + (1 - result_a/255) * ((ae_input + 1) * 255 / 2)).astype('uint8')\n",
    "       \n",
    "    result = cv2.cvtColor(result, cv2.COLOR_BGR2RGB) \n",
    "    result = cv2.resize(result, (input_size[1],input_size[0]))\n",
    "    result_a = np.expand_dims(cv2.resize(result_a, (input_size[1],input_size[0])), axis=2)\n",
    "    return result, result_a"
   ]
  },
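  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The masked blend in `swap_face` is standard alpha compositing, `out = alpha * fg + (1 - alpha) * bg`, with the model's first output channel used as alpha. A minimal numpy sketch of that blend on hypothetical 2x2 images:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha = np.array([[1.0, 0.5], [0.0, 1.0]])[..., None]  # mask in [0, 1]\n",
    "fg = np.full((2, 2, 3), 200.0)  # generated (foreground) pixels\n",
    "bg = np.full((2, 2, 3), 100.0)  # original (background) pixels\n",
    "\n",
    "out = alpha * fg + (1 - alpha) * bg\n",
    "print(out[0, 0, 0], out[0, 1, 0], out[1, 0, 0])  # 200.0 150.0 100.0\n",
    "```"
   ]
  },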
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "direction = \"BtoA\" # default: transform face B to face A\n",
    "\n",
    "if direction == \"AtoB\":\n",
    "    path_func = path_abgr_B\n",
    "elif direction == \"BtoA\":\n",
    "    path_func = path_abgr_A\n",
    "else:\n",
    "    raise ValueError(\"direction should be either AtoB or BtoA\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_img = plt.imread(\"./TEST_FACE.jpg\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.imshow(input_img)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "result_img, result_mask = swap_face(input_img, path_func)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.imshow(result_img)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.imshow(result_mask[:, :, 0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='12'></a>\n",
    "# 12. Make video clips\n",
    "\n",
    "Given an input video, the following cells detect the face in each frame using dlib's CNN detector, transform it into the target face with the trained GAN model, and write out a video with the swapped faces."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Imageio: 'ffmpeg.linux64' was not found on your computer; downloading it now.\n",
      "Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg.linux64 (27.2 MB)\n",
      "Downloading: 28549024/28549024 bytes (100.0%)\n",
      "  Done\n",
      "File saved as /root/.imageio/ffmpeg/ffmpeg.linux64.\n"
     ]
    }
   ],
   "source": [
    "# Download ffmpeg if needed, which is required by moviepy.\n",
    "\n",
    "#import imageio\n",
    "#imageio.plugins.ffmpeg.download()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "import face_recognition\n",
    "from moviepy.editor import VideoFileClip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='13'></a>\n",
    "# 13. Make video clips w/o face alignment\n",
    "\n",
    "### Default transform: face B to face A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_smoothed_mask = True\n",
    "use_smoothed_bbox = True\n",
    "\n",
    "def kalmanfilter_init(noise_coef):\n",
    "    kf = cv2.KalmanFilter(4,2)\n",
    "    kf.measurementMatrix = np.array([[1,0,0,0],[0,1,0,0]], np.float32)\n",
    "    kf.transitionMatrix = np.array([[1,0,1,0],[0,1,0,1],[0,0,1,0],[0,0,0,1]], np.float32)\n",
    "    kf.processNoiseCov = noise_coef * np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], np.float32)\n",
    "    return kf\n",
    "\n",
    "def is_higher_than_480p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (858*480)\n",
    "\n",
    "def is_higher_than_720p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (1280*720)\n",
    "\n",
    "def is_higher_than_1080p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (1920*1080)\n",
    "\n",
    "def calibrate_coord(faces, video_scaling_factor):\n",
    "    for i, (x0, y1, x1, y0) in enumerate(faces):\n",
    "        faces[i] = (x0*video_scaling_factor, y1*video_scaling_factor, \n",
    "                    x1*video_scaling_factor, y0*video_scaling_factor)\n",
    "    return faces\n",
    "\n",
    "def get_faces_bbox(image, model=\"cnn\"):  \n",
    "    if is_higher_than_1080p(image):\n",
    "        video_scaling_factor = 4 + video_scaling_offset\n",
    "        resized_image = cv2.resize(image, \n",
    "                                   (image.shape[1]//video_scaling_factor, image.shape[0]//video_scaling_factor))\n",
    "        faces = face_recognition.face_locations(resized_image, model=model)\n",
    "        faces = calibrate_coord(faces, video_scaling_factor)\n",
    "    elif is_higher_than_720p(image):\n",
    "        video_scaling_factor = 3 + video_scaling_offset\n",
    "        resized_image = cv2.resize(image, \n",
    "                                   (image.shape[1]//video_scaling_factor, image.shape[0]//video_scaling_factor))\n",
    "        faces = face_recognition.face_locations(resized_image, model=model)\n",
    "        faces = calibrate_coord(faces, video_scaling_factor)  \n",
    "    elif is_higher_than_480p(image):\n",
    "        video_scaling_factor = 2 + video_scaling_offset\n",
    "        resized_image = cv2.resize(image, \n",
    "                                   (image.shape[1]//video_scaling_factor, image.shape[0]//video_scaling_factor))\n",
    "        faces = face_recognition.face_locations(resized_image, model=model)\n",
    "        faces = calibrate_coord(faces, video_scaling_factor)\n",
    "    elif manually_downscale:\n",
    "        video_scaling_factor = manually_downscale_factor\n",
    "        resized_image = cv2.resize(image, \n",
    "                                   (image.shape[1]//video_scaling_factor, image.shape[0]//video_scaling_factor))\n",
    "        faces = face_recognition.face_locations(resized_image, model=model)\n",
    "        faces = calibrate_coord(faces, video_scaling_factor)\n",
    "    else:\n",
    "        faces = face_recognition.face_locations(image, model=model)\n",
    "    return faces\n",
    "\n",
    "def get_smoothed_coord(x0, x1, y0, y1, shape, ratio=0.65):\n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    global frames\n",
    "    if not use_kalman_filter:\n",
    "        x0 = int(ratio * prev_x0 + (1-ratio) * x0)\n",
    "        x1 = int(ratio * prev_x1 + (1-ratio) * x1)\n",
    "        y1 = int(ratio * prev_y1 + (1-ratio) * y1)\n",
    "        y0 = int(ratio * prev_y0 + (1-ratio) * y0)\n",
    "    else:\n",
    "        x0y0 = np.array([x0, y0]).astype(np.float32)\n",
    "        x1y1 = np.array([x1, y1]).astype(np.float32)\n",
    "        if frames == 0:\n",
    "            for i in range(200):\n",
    "                kf0.predict()\n",
    "                kf1.predict()\n",
    "        kf0.correct(x0y0)\n",
    "        pred_x0y0 = kf0.predict()\n",
    "        kf1.correct(x1y1)\n",
    "        pred_x1y1 = kf1.predict()\n",
    "        x0 = np.max([0, pred_x0y0[0][0]]).astype(int)\n",
    "        x1 = np.min([shape[0], pred_x1y1[0][0]]).astype(int)\n",
    "        y0 = np.max([0, pred_x0y0[1][0]]).astype(int)\n",
    "        y1 = np.min([shape[1], pred_x1y1[1][0]]).astype(int)\n",
    "        if x0 == x1 or y0 == y1:\n",
    "            x0, y0, x1, y1 = prev_x0, prev_y0, prev_x1, prev_y1\n",
    "    return x0, x1, y0, y1    \n",
    "    \n",
    "def set_global_coord(x0, x1, y0, y1):\n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    prev_x0 = x0\n",
    "    prev_x1 = x1\n",
    "    prev_y1 = y1\n",
    "    prev_y0 = y0\n",
    "    \n",
    "def generate_face(ae_input, path_abgr, roi_size, roi_image):\n",
    "    result = np.squeeze(np.array([path_abgr([[ae_input]])]))\n",
    "    result_a = result[:,:,0] * 255\n",
    "    result_bgr = np.clip( (result[:,:,1:] + 1) * 255 / 2, 0, 255 )\n",
    "    result_a = cv2.GaussianBlur(result_a ,(7,7),6)\n",
    "    result_a = np.expand_dims(result_a, axis=2)\n",
    "    result = (result_a/255 * result_bgr + (1 - result_a/255) * ((ae_input + 1) * 255 / 2)).astype('uint8')\n",
    "    if use_color_correction:\n",
    "        result = color_hist_match(result, roi_image)\n",
    "    result = cv2.cvtColor(result.astype(np.uint8), cv2.COLOR_BGR2RGB)\n",
    "    result = cv2.resize(result, (roi_size[1],roi_size[0]))\n",
    "    result_a = np.expand_dims(cv2.resize(result_a, (roi_size[1],roi_size[0])), axis=2)\n",
    "    return result, result_a\n",
    "\n",
    "def get_init_mask_map(image):\n",
    "    return np.zeros_like(image)\n",
    "\n",
    "def get_init_comb_img(input_img):\n",
    "    comb_img = np.zeros([input_img.shape[0], input_img.shape[1]*2,input_img.shape[2]])\n",
    "    comb_img[:, :input_img.shape[1], :] = input_img\n",
    "    comb_img[:, input_img.shape[1]:, :] = input_img\n",
    "    return comb_img    \n",
    "\n",
    "def get_init_triple_img(input_img, no_face=False):\n",
    "    if no_face:\n",
    "        triple_img = np.zeros([input_img.shape[0], input_img.shape[1]*3,input_img.shape[2]])\n",
    "        triple_img[:, :input_img.shape[1], :] = input_img\n",
    "        triple_img[:, input_img.shape[1]:input_img.shape[1]*2, :] = input_img      \n",
    "        triple_img[:, input_img.shape[1]*2:, :] = (input_img * .15).astype('uint8')  \n",
    "        return triple_img\n",
    "    else:\n",
    "        triple_img = np.zeros([input_img.shape[0], input_img.shape[1]*3,input_img.shape[2]])\n",
    "        return triple_img\n",
    "\n",
    "def get_mask(roi_image, h, w):\n",
    "    mask = np.zeros_like(roi_image)\n",
    "    mask[h//15:-h//15,w//15:-w//15,:] = 255\n",
    "    mask = cv2.GaussianBlur(mask,(15,15),10)\n",
    "    return mask\n",
    "\n",
    "def hist_match(source, template):\n",
    "    # Code borrowed from:\n",
    "    # https://stackoverflow.com/questions/32655686/histogram-matching-of-two-images-in-python-2-x\n",
    "    oldshape = source.shape\n",
    "    source = source.ravel()\n",
    "    template = template.ravel()\n",
    "    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,\n",
    "                                            return_counts=True)\n",
    "    t_values, t_counts = np.unique(template, return_counts=True)\n",
    "\n",
    "    s_quantiles = np.cumsum(s_counts).astype(np.float64)\n",
    "    s_quantiles /= s_quantiles[-1]\n",
    "    t_quantiles = np.cumsum(t_counts).astype(np.float64)\n",
    "    t_quantiles /= t_quantiles[-1]\n",
    "    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)\n",
    "\n",
    "    return interp_t_values[bin_idx].reshape(oldshape)\n",
    "\n",
    "def color_hist_match(src_im, tar_im):\n",
    "    #src_im = cv2.cvtColor(src_im, cv2.COLOR_BGR2Lab)\n",
    "    #tar_im = cv2.cvtColor(tar_im, cv2.COLOR_BGR2Lab)\n",
    "    matched_R = hist_match(src_im[:,:,0], tar_im[:,:,0])\n",
    "    matched_G = hist_match(src_im[:,:,1], tar_im[:,:,1])\n",
    "    matched_B = hist_match(src_im[:,:,2], tar_im[:,:,2])\n",
    "    matched = np.stack((matched_R, matched_G, matched_B), axis=2).astype(np.float64)\n",
    "    return matched\n",
    "\n",
    "def process_video(input_img):   \n",
    "    # modify this line to reduce input size\n",
    "    #input_img = input_img[:, input_img.shape[1]//3:2*input_img.shape[1]//3,:] \n",
    "    image = input_img\n",
    "    faces = get_faces_bbox(image, model=\"cnn\")\n",
    "    \n",
    "    if len(faces) == 0:\n",
    "        comb_img = get_init_comb_img(input_img)\n",
    "        triple_img = get_init_triple_img(input_img, no_face=True)\n",
    "        \n",
    "    mask_map = get_init_mask_map(image)\n",
    "    comb_img = get_init_comb_img(input_img)\n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    global frames    \n",
    "    for (x0, y1, x1, y0) in faces:        \n",
    "        # smoothing bounding box\n",
    "        if use_smoothed_bbox:\n",
    "            if frames != 0:\n",
    "                x0, x1, y0, y1 = get_smoothed_coord(x0, x1, y0, y1, \n",
    "                                                    image.shape, \n",
    "                                                    ratio=0.65 if use_kalman_filter else bbox_moving_avg_coef)\n",
    "                set_global_coord(x0, x1, y0, y1)\n",
    "                frames += 1\n",
    "            else:\n",
    "                set_global_coord(x0, x1, y0, y1)\n",
    "                _ = get_smoothed_coord(x0, x1, y0, y1, image.shape)\n",
    "                frames += 1\n",
    "        h = x1 - x0\n",
    "        w = y1 - y0\n",
    "            \n",
    "        cv2_img = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)\n",
    "        roi_image = cv2_img[x0+h//15:x1-h//15,y0+w//15:y1-w//15,:]\n",
    "        roi_size = roi_image.shape  \n",
    "        \n",
    "        ae_input = cv2.resize(roi_image, (64,64))/255. * 2 - 1        \n",
    "        result, result_a = generate_face(ae_input, path_abgr_A, roi_size, roi_image)\n",
    "        mask_map[x0+h//15:x1-h//15, y0+w//15:y1-w//15,:] = result_a\n",
    "        mask_map = np.clip(mask_map + .15 * input_img, 0, 255 )     \n",
    "        \n",
    "        if use_smoothed_mask:\n",
    "            mask = get_mask(roi_image, h, w)\n",
    "            roi_rgb = cv2.cvtColor(roi_image, cv2.COLOR_BGR2RGB)\n",
    "            smoothed_result = mask/255 * result + (1-mask/255) * roi_rgb\n",
    "            comb_img[x0+h//15:x1-h//15, input_img.shape[1]+y0+w//15:input_img.shape[1]+y1-w//15,:] = smoothed_result\n",
    "        else:\n",
    "            comb_img[x0+h//15:x1-h//15, input_img.shape[1]+y0+w//15:input_img.shape[1]+y1-w//15,:] = result\n",
    "            \n",
    "        triple_img = get_init_triple_img(input_img)\n",
    "        triple_img[:, :input_img.shape[1]*2, :] = comb_img\n",
    "        triple_img[:, input_img.shape[1]*2:, :] = mask_map\n",
    "    \n",
    "    # ========== Change the following line for a different output type ==========\n",
    "    # return comb_img[:, input_img.shape[1]:, :]  # return only the result image\n",
    "    # return comb_img  # return input and result images combined as one\n",
    "    return triple_img  # return input, result, and mask heatmap combined as one"
   ]
  },
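  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `hist_match` helper above maps the source image's intensity CDF onto the template's. Its behaviour can be checked on tiny 1-D arrays (hypothetical data; same `np.unique`/`np.interp` algorithm as the cell above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def hist_match(source, template):\n",
    "    oldshape = source.shape\n",
    "    source = source.ravel()\n",
    "    template = template.ravel()\n",
    "    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True, return_counts=True)\n",
    "    t_values, t_counts = np.unique(template, return_counts=True)\n",
    "    s_quantiles = np.cumsum(s_counts).astype(np.float64)\n",
    "    s_quantiles /= s_quantiles[-1]\n",
    "    t_quantiles = np.cumsum(t_counts).astype(np.float64)\n",
    "    t_quantiles /= t_quantiles[-1]\n",
    "    # Map each source quantile to the template value at the same quantile\n",
    "    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)\n",
    "    return interp_t_values[bin_idx].reshape(oldshape)\n",
    "\n",
    "src = np.array([0, 0, 1, 1])      # dark source\n",
    "tpl = np.array([10, 10, 20, 20])  # bright template\n",
    "print(hist_match(src, tpl))       # [10. 10. 20. 20.]\n",
    "```"
   ]
  },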
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Description**\n",
    "```python\n",
    "    video_scaling_offset = 0 # Increase by 1 if OOM happens.\n",
    "    manually_downscale = False # Set to True if increasing the offset doesn't help.\n",
    "    manually_downscale_factor = int(2) # Increase by 1 if OOM still happens.\n",
    "    use_color_correction = False # Option for color correction\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_kalman_filter = True\n",
    "\n",
    "if use_kalman_filter:\n",
    "    noise_coef = 5e-3 # Increase by 10x if tracking is slow. \n",
    "    kf0 = kalmanfilter_init(noise_coef)\n",
    "    kf1 = kalmanfilter_init(noise_coef)\n",
    "else:\n",
    "    bbox_moving_avg_coef = 0.65"
   ]
  },
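  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When `use_kalman_filter` is False, `get_smoothed_coord` falls back to an exponential moving average, `new = ratio * prev + (1 - ratio) * current`: a higher `ratio` gives a smoother but laggier bounding box. A minimal sketch on a single 1-D coordinate (hypothetical detections):\n",
    "\n",
    "```python\n",
    "def smooth(prev, current, ratio=0.65):\n",
    "    # Same blend as the non-Kalman branch of get_smoothed_coord\n",
    "    return int(ratio * prev + (1 - ratio) * current)\n",
    "\n",
    "coords = [100, 110, 90, 120]  # noisy per-frame x0 detections\n",
    "prev = coords[0]\n",
    "for c in coords[1:]:\n",
    "    prev = smooth(prev, c)\n",
    "print(prev)  # 105\n",
    "```"
   ]
  },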
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[MoviePy] >>>> Building video tmp_sh_test_clipped3.mp4\n",
      "[MoviePy] Writing video tmp_sh_test_clipped3.mp4\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████▉| 540/541 [01:50<00:00,  4.92it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[MoviePy] Done.\n",
      "[MoviePy] >>>> Video ready: tmp_sh_test_clipped3.mp4 \n",
      "\n",
      "CPU times: user 1min 33s, sys: 17.1 s, total: 1min 50s\n",
      "Wall time: 1min 51s\n"
     ]
    }
   ],
   "source": [
    "# Variables for smoothing the bounding box (module-level, shared with process_video)\n",
    "prev_x0 = prev_x1 = prev_y0 = prev_y1 = 0\n",
    "frames = 0\n",
    "video_scaling_offset = 0 \n",
    "manually_downscale = False\n",
    "manually_downscale_factor = int(2) # should be a positive integer\n",
    "use_color_correction = False\n",
    "\n",
    "output = 'OUTPUT_VIDEO.mp4'\n",
    "clip1 = VideoFileClip(\"INPUT_VIDEO.mp4\")\n",
    "clip = clip1.fl_image(process_video)#.subclip(11, 13) #NOTE: this function expects color images!!\n",
    "%time clip.write_videofile(output, audio=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### gc.collect() sometimes resolves memory errors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 111,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "603"
      ]
     },
     "execution_count": 111,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import gc\n",
    "gc.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='14'></a>\n",
    "# 14. Make video clips w/ face alignment\n",
    "\n",
    "### Default transform: face B to face A\n",
    "\n",
    "The code is not polished, and it is unclear whether face alignment actually improves the result.\n",
    "\n",
    "Code reference: https://github.com/nlhkh/face-alignment-dlib"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_smoothed_mask = True\n",
    "apply_face_aln = True\n",
    "use_poisson_blending = False # SeamlessClone is NOT recommended for video.\n",
    "use_comp_video = True # output a comparison video before/after face swap\n",
    "use_smoothed_bbox = True\n",
    "\n",
    "def kalmanfilter_init(noise_coef):\n",
    "    kf = cv2.KalmanFilter(4,2)\n",
    "    kf.measurementMatrix = np.array([[1,0,0,0],[0,1,0,0]], np.float32)\n",
    "    kf.transitionMatrix = np.array([[1,0,1,0],[0,1,0,1],[0,0,1,0],[0,0,0,1]], np.float32)\n",
    "    kf.processNoiseCov = noise_coef * np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], np.float32)\n",
    "    return kf\n",
    "\n",
    "def is_higher_than_480p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (858*480)\n",
    "\n",
    "def is_higher_than_720p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (1280*720)\n",
    "\n",
    "def is_higher_than_1080p(x):\n",
    "    return (x.shape[0] * x.shape[1]) >= (1920*1080)\n",
    "\n",
    "def calibrate_coord(faces, video_scaling_factor):\n",
    "    for i, (x0, y1, x1, y0) in enumerate(faces):\n",
    "        faces[i] = (x0*video_scaling_factor, y1*video_scaling_factor, \n",
    "                    x1*video_scaling_factor, y0*video_scaling_factor)\n",
    "    return faces\n",
    "\n",
    "def get_faces_bbox(image, model=\"cnn\"):\n",
    "    # Downscale large frames before detection (to avoid OOM), then map\n",
    "    # the detected boxes back to the original resolution.\n",
    "    if is_higher_than_1080p(image):\n",
    "        video_scaling_factor = 4 + video_scaling_offset\n",
    "    elif is_higher_than_720p(image):\n",
    "        video_scaling_factor = 3 + video_scaling_offset\n",
    "    elif is_higher_than_480p(image):\n",
    "        video_scaling_factor = 2 + video_scaling_offset\n",
    "    elif manually_downscale:\n",
    "        video_scaling_factor = manually_downscale_factor\n",
    "    else:\n",
    "        return face_recognition.face_locations(image, model=model)\n",
    "    resized_image = cv2.resize(image, \n",
    "                               (image.shape[1]//video_scaling_factor, image.shape[0]//video_scaling_factor))\n",
    "    faces = face_recognition.face_locations(resized_image, model=model)\n",
    "    return calibrate_coord(faces, video_scaling_factor)\n",
    "\n",
    "def get_smoothed_coord(x0, x1, y0, y1, shape, ratio=0.65):\n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    global frames\n",
    "    if not use_kalman_filter:\n",
    "        x0 = int(ratio * prev_x0 + (1-ratio) * x0)\n",
    "        x1 = int(ratio * prev_x1 + (1-ratio) * x1)\n",
    "        y1 = int(ratio * prev_y1 + (1-ratio) * y1)\n",
    "        y0 = int(ratio * prev_y0 + (1-ratio) * y0)\n",
    "    else:\n",
    "        x0y0 = np.array([x0, y0]).astype(np.float32)\n",
    "        x1y1 = np.array([x1, y1]).astype(np.float32)\n",
    "        if frames == 0:\n",
    "            # Warm up the filters so the error covariance (and hence the\n",
    "            # Kalman gain) is non-trivial before the first correction.\n",
    "            for i in range(200):\n",
    "                kf0.predict()\n",
    "                kf1.predict()\n",
    "        kf0.correct(x0y0)\n",
    "        pred_x0y0 = kf0.predict()\n",
    "        kf1.correct(x1y1)\n",
    "        pred_x1y1 = kf1.predict()\n",
    "        x0 = np.max([0, pred_x0y0[0][0]]).astype(int)\n",
    "        x1 = np.min([shape[0], pred_x1y1[0][0]]).astype(int)\n",
    "        y0 = np.max([0, pred_x0y0[1][0]]).astype(int)\n",
    "        y1 = np.min([shape[1], pred_x1y1[1][0]]).astype(int)\n",
    "        if x0 == x1 or y0 == y1:\n",
    "            x0, y0, x1, y1 = prev_x0, prev_y0, prev_x1, prev_y1\n",
    "    return x0, x1, y0, y1    \n",
    "    \n",
    "def set_global_coord(x0, x1, y0, y1):\n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    prev_x0 = x0\n",
    "    prev_x1 = x1\n",
    "    prev_y1 = y1\n",
    "    prev_y0 = y0\n",
    "    \n",
    "def extract_eye_center(shape):\n",
    "    # Mean of the eye landmark points (dlib returns 6 points per eye).\n",
    "    xs = 0\n",
    "    ys = 0\n",
    "    for pnt in shape:\n",
    "        xs += pnt[0]\n",
    "        ys += pnt[1]\n",
    "    return (xs // len(shape), ys // len(shape))\n",
    "\n",
    "def get_rotation_matrix(p1, p2):\n",
    "    angle = angle_between_2_points(p1, p2)\n",
    "    x1, y1 = p1\n",
    "    x2, y2 = p2\n",
    "    xc = (x1 + x2) // 2\n",
    "    yc = (y1 + y2) // 2\n",
    "    M = cv2.getRotationMatrix2D((xc, yc), angle, 1)\n",
    "    return M, (xc, yc), angle\n",
    "\n",
    "def angle_between_2_points(p1, p2):\n",
    "    x1, y1 = p1\n",
    "    x2, y2 = p2\n",
    "    if x1 == x2:\n",
    "        return 90\n",
    "    tan = (y2 - y1) / (x2 - x1)\n",
    "    return np.degrees(np.arctan(tan))\n",
    "\n",
    "def get_rotated_img(img, det):\n",
    "    shape = face_recognition.face_landmarks(img, det)\n",
    "    if len(shape) == 0:\n",
    "        return img, None, None, None\n",
    "    pnts_left_eye = shape[0][\"left_eye\"]\n",
    "    pnts_right_eye = shape[0][\"right_eye\"]\n",
    "    if len(pnts_left_eye) == 0 or len(pnts_right_eye) == 0:\n",
    "        return img, None, None, None\n",
    "    le_center = extract_eye_center(pnts_left_eye)\n",
    "    re_center = extract_eye_center(pnts_right_eye)\n",
    "    M, center, angle = get_rotation_matrix(le_center, re_center)\n",
    "    M_inv = cv2.getRotationMatrix2D(center, -1*angle, 1)\n",
    "    rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_CUBIC)\n",
    "    return rotated, M, M_inv, center\n",
    "\n",
    "def hist_match(source, template):\n",
    "    # Code borrow from:\n",
    "    # https://stackoverflow.com/questions/32655686/histogram-matching-of-two-images-in-python-2-x\n",
    "    oldshape = source.shape\n",
    "    source = source.ravel()\n",
    "    template = template.ravel()\n",
    "    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,\n",
    "                                            return_counts=True)\n",
    "    t_values, t_counts = np.unique(template, return_counts=True)\n",
    "\n",
    "    s_quantiles = np.cumsum(s_counts).astype(np.float64)\n",
    "    s_quantiles /= s_quantiles[-1]\n",
    "    t_quantiles = np.cumsum(t_counts).astype(np.float64)\n",
    "    t_quantiles /= t_quantiles[-1]\n",
    "    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)\n",
    "\n",
    "    return interp_t_values[bin_idx].reshape(oldshape)\n",
    "\n",
    "def color_hist_match(src_im, tar_im):\n",
    "    matched_R = hist_match(src_im[:,:,0], tar_im[:,:,0])\n",
    "    matched_G = hist_match(src_im[:,:,1], tar_im[:,:,1])\n",
    "    matched_B = hist_match(src_im[:,:,2], tar_im[:,:,2])\n",
    "    matched = np.stack((matched_R, matched_G, matched_B), axis=2).astype(np.float64)\n",
    "    return matched\n",
    "\n",
    "def process_video(input_img):   \n",
    "    image = input_img\n",
    "    # ========== Decrease image size if getting memory error ==========\n",
    "    #image = input_img[:3*input_img.shape[0]//4, :, :]\n",
    "    #image = cv2.resize(image, (image.shape[1]//2,image.shape[0]//2))\n",
    "    orig_image = np.array(image)\n",
    "    faces = get_faces_bbox(image, model=\"cnn\")\n",
    "    \n",
    "    if len(faces) == 0:\n",
    "        comb_img = np.zeros([orig_image.shape[0], orig_image.shape[1]*2, orig_image.shape[2]], dtype=np.uint8)\n",
    "        comb_img[:, :orig_image.shape[1], :] = orig_image\n",
    "        comb_img[:, orig_image.shape[1]:, :] = orig_image\n",
    "        if use_comp_video:\n",
    "            return comb_img\n",
    "        else:\n",
    "            return image\n",
    "    \n",
    "    global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "    global frames\n",
    "    for (x0, y1, x1, y0) in faces:        \n",
    "        # smoothing bounding box\n",
    "        if use_smoothed_bbox:\n",
    "            if frames != 0:\n",
    "                x0, x1, y0, y1 = get_smoothed_coord(x0, x1, y0, y1, \n",
    "                                                    image.shape, \n",
    "                                                    ratio=0.65 if use_kalman_filter else bbox_moving_avg_coef)\n",
    "                set_global_coord(x0, x1, y0, y1)\n",
    "                frames += 1\n",
    "            else:\n",
    "                set_global_coord(x0, x1, y0, y1)\n",
    "                _ = get_smoothed_coord(x0, x1, y0, y1, image.shape)\n",
    "                frames += 1      \n",
    "        h = x1 - x0\n",
    "        w = y1 - y0\n",
    "                \n",
    "        do_back_rot = False  # avoid NameError below when apply_face_aln is False\n",
    "        if apply_face_aln:\n",
    "            do_back_rot = True\n",
    "            image, M, M_inv, center = get_rotated_img(image, [(x0, y1, x1, y0)])\n",
    "            if M is None:\n",
    "                do_back_rot = False\n",
    "        \n",
    "        cv2_img = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) \n",
    "        roi_image = cv2_img[x0+h//15:x1-h//15, y0+w//15:y1-w//15, :]\n",
    "        roi_size = roi_image.shape            \n",
    "        \n",
    "        if use_smoothed_mask:\n",
    "            mask = np.zeros_like(roi_image)\n",
    "            #print (roi_image.shape, mask.shape)\n",
    "            mask[h//15:-h//15,w//15:-w//15,:] = 255\n",
    "            mask = cv2.GaussianBlur(mask,(15,15),10)\n",
    "            roi_image_rgb = cv2.cvtColor(roi_image, cv2.COLOR_BGR2RGB)\n",
    "        \n",
    "        ae_input = cv2.resize(roi_image, (64,64))/255. * 2 - 1        \n",
    "        result = np.squeeze(np.array([path_abgr_A([[ae_input]])]))\n",
    "        result_a = result[:,:,0] * 255\n",
    "        result_bgr = np.clip( (result[:,:,1:] + 1) * 255 / 2, 0, 255 )\n",
    "        result_a = cv2.GaussianBlur(result_a ,(7,7),6)\n",
    "        result_a = np.expand_dims(result_a, axis=2)\n",
    "        result = (result_a/255 * result_bgr + (1 - result_a/255) * ((ae_input + 1) * 255 / 2)).astype('uint8')\n",
    "        if use_color_correction:\n",
    "            result = color_hist_match(result, roi_image)\n",
    "        result = cv2.cvtColor(result.astype(np.uint8), cv2.COLOR_BGR2RGB)\n",
    "        result = cv2.resize(result, (roi_size[1],roi_size[0]))        \n",
    "        result_img = np.array(orig_image)\n",
    "        \n",
    "        if use_smoothed_mask and not use_poisson_blending:\n",
    "            image[x0+h//15:x1-h//15, y0+w//15:y1-w//15,:] = mask/255*result + (1-mask/255)*roi_image_rgb\n",
    "        elif use_poisson_blending:\n",
    "            c = (y0+w//2, x0+h//2)\n",
    "            image = cv2.seamlessClone(result, image, mask, c, cv2.NORMAL_CLONE)     \n",
    "            \n",
    "        if do_back_rot:\n",
    "            image = cv2.warpAffine(image, M_inv, (image.shape[1], image.shape[0]), flags=cv2.INTER_CUBIC)\n",
    "        result_img[x0+h//15:x1-h//15, y0+w//15:y1-w//15,:] = image[x0+h//15:x1-h//15, y0+w//15:y1-w//15,:]\n",
    "\n",
    "        if use_comp_video:\n",
    "            comb_img = np.zeros([orig_image.shape[0], orig_image.shape[1]*2, orig_image.shape[2]], dtype=np.uint8)\n",
    "            comb_img[:, :orig_image.shape[1], :] = orig_image\n",
    "            comb_img[:, orig_image.shape[1]:, :] = result_img\n",
    "            \n",
    "    if use_comp_video:\n",
    "        return comb_img\n",
    "    else:\n",
    "        return result_img"
   ]
  },
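  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check for hist_match()/color_hist_match() above; safe to skip.\n",
    "# The arrays below are tiny synthetic examples, not real face crops.\n",
    "_src = np.array([[0, 0], [255, 255]], dtype=np.uint8)\n",
    "_tpl = np.array([[10, 10], [200, 200]], dtype=np.uint8)\n",
    "_matched = hist_match(_src, _tpl)\n",
    "assert _matched.shape == _src.shape\n",
    "# Matched values are drawn from the template's intensity range.\n",
    "assert _matched.min() >= _tpl.min() and _matched.max() <= _tpl.max()"
   ]
  },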
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Description**\n",
    "```python\n",
    "video_scaling_offset = 0 # Increase by 1 if OOM happens.\n",
    "manually_downscale = False # Set True if increasing the offset doesn't help.\n",
    "manually_downscale_factor = 2 # Increase by 1 if OOM still happens.\n",
    "use_color_correction = False # Option for color correction.\n",
    "```"
   ]
  },
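  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check for the geometry helpers defined above; safe to skip.\n",
    "# The coordinates are made-up example values.\n",
    "assert angle_between_2_points((0, 0), (10, 0)) == 0.0  # horizontal eye line -> no rotation\n",
    "assert angle_between_2_points((5, 0), (5, 7)) == 90    # vertical-line edge case\n",
    "assert calibrate_coord([(1, 2, 3, 4)], 2) == [(2, 4, 6, 8)]  # bbox mapped back to full resolution"
   ]
  },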
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_kalman_filter = True\n",
    "\n",
    "if use_kalman_filter:\n",
    "    noise_coef = 5e-3 # Increase by 10x if tracking is slow. \n",
    "    kf0 = kalmanfilter_init(noise_coef)\n",
    "    kf1 = kalmanfilter_init(noise_coef)\n",
    "else:\n",
    "    bbox_moving_avg_coef = 0.65"
   ]
  },
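  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standalone illustration of the correct()/predict() cycle that\n",
    "# get_smoothed_coord() runs on kf0/kf1. This is only a sketch with a\n",
    "# throwaway filter; it does not replace kalmanfilter_init() defined\n",
    "# earlier. The filter tracks a constant-velocity state (x, y, dx, dy)\n",
    "# from noisy (x, y) measurements.\n",
    "_kf = cv2.KalmanFilter(4, 2)\n",
    "_kf.transitionMatrix = np.array([[1, 0, 1, 0],\n",
    "                                 [0, 1, 0, 1],\n",
    "                                 [0, 0, 1, 0],\n",
    "                                 [0, 0, 0, 1]], dtype=np.float32)\n",
    "_kf.measurementMatrix = np.array([[1, 0, 0, 0],\n",
    "                                  [0, 1, 0, 0]], dtype=np.float32)\n",
    "_kf.processNoiseCov = 5e-3 * np.eye(4, dtype=np.float32)\n",
    "_kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)\n",
    "for _ in range(200):  # warm up, as get_smoothed_coord() does on frame 0\n",
    "    _kf.predict()\n",
    "for _x in [100.0, 102.0, 104.0]:  # a bbox corner drifting right ~2 px/frame\n",
    "    _kf.correct(np.array([[_x], [50.0]], dtype=np.float32))\n",
    "    _pred = _kf.predict()\n",
    "print('smoothed (x, y):', _pred[0][0], _pred[1][0])"
   ]
  },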
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Variables for smoothing bounding box\n",
    "global prev_x0, prev_x1, prev_y0, prev_y1\n",
    "global frames\n",
    "prev_x0 = prev_x1 = prev_y0 = prev_y1 = 0\n",
    "frames = 0\n",
    "video_scaling_offset = 0 \n",
    "manually_downscale = False\n",
    "manually_downscale_factor = 2 # should be a positive integer\n",
    "use_color_correction = False\n",
    "\n",
    "output = 'OUTPUT_VIDEO.mp4'\n",
    "clip1 = VideoFileClip(\"TEST_VIDEO.mp4\")\n",
    "# .subclip(START_SEC, END_SEC) for testing\n",
    "clip = clip1.fl_image(process_video)#.subclip(1, 5) #NOTE: this function expects color images!!\n",
    "%time clip.write_videofile(output, audio=False)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
