{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "fomm-live.ipynb",
      "private_outputs": true,
      "provenance": [],
      "machine_shape": "hm",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_live.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9duzzorgTWLt"
      },
      "source": [
        "# Demo for paper \"First Order Motion Model for Image Animation\"\n",
        "\n",
        "## **Live webcam in the browser!**\n",
        "\n",
        "### Original project: https://aliaksandrsiarohin.github.io/first-order-model-website\n",
        "\n",
        "#### Made just a little bit more accessible by Eyal Gruss ([@eyaler](https://twitter.com/eyaler) / [eyalgruss.com](https://eyalgruss.com) / [eyalgruss@gmail.com](mailto:eyalgruss@gmail.com))\n",
        "\n",
        "#### Short link here: https://j.mp/cam2head\n",
        "\n",
        "##### Click below for more references:"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "##### Original notebook: https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb\n",
        "\n",
        "##### Faceswap notebook: https://colab.research.google.com/github/AliaksandrSiarohin/motion-cosegmentation/blob/master/part_swap.ipynb\n",
        "\n",
        "##### Notebook with video enhancement: https://colab.research.google.com/github/tg-bomze/Face-Image-Motion-Model/blob/master/Face_Image_Motion_Model_(Photo_2_Video)_Eng.ipynb\n",
        "\n",
        "##### Avatarify - a live version (requires local installation): https://github.com/alievk/avatarify\n",
        "\n",
        "##### This live Colab solution is heavily based on the WebSocket implementation: https://github.com/a2kiti/webCamGoogleColab, https://qiita.com/a2kiti/items/f32de4f51a31d609e5a5\n",
        "\n",
        "##### Other notable attempts based on WebRTC and aiortc (https://github.com/aiortc/aiortc):\n",
        "##### https://github.com/thefonseca/colabrtc\n",
        "##### https://github.com/l4rz/first-order-model/tree/master/webrtc\n",
        "##### https://gist.github.com/myagues/aac0c597f8ad0fa7ebe7d017b0c5603b\n",
        "##### https://colab.research.google.com/github/eyaler/avatars4all/blob/master/incomplete_webrtc_fomm_live.ipynb (EG)\n",
        "\n",
        "##### Randomly generated images from:\n",
        "##### https://thispersondoesnotexist.com\n",
        "##### https://fakeface.rest\n",
        "##### https://www.thiswaifudoesnotexist.net\n",
        "##### https://thisfursonadoesnotexist.com\n",
        "##### https://eyalgruss.com/thismuppetdoesnotexist (@norod78, EG)\n",
        "\n",
        "#### **Stuff I made**:\n",
        "##### Avatars4all repository: https://github.com/eyaler/avatars4all\n",
        "##### Notebook for live webcam in the browser: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_live.ipynb\n",
        "##### Notebook for talking head model: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_bibi.ipynb\n",
        "##### Notebook for full body models (FOMM): https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_fufu.ipynb\n",
        "##### Notebook for full body models (impersonator): https://colab.research.google.com/github/eyaler/avatars4all/blob/master/ganozli.ipynb\n",
        "##### Notebook for full body models (impersonator++): https://colab.research.google.com/github/eyaler/avatars4all/blob/master/ganivut.ipynb\n",
        "##### Notebook for Wav2Lip audio based lip syncing: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/melaflefon.ipynb\n",
        "##### List of more generative tools (outdated): https://j.mp/generativetools"
      ],
      "metadata": {
        "id": "lNQ3xL4odXHX"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Run me!"
      ],
      "metadata": {
        "id": "bmeoiOI4dgnW"
      }
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XadNYjWOJ1cw",
        "cellView": "form"
      },
      "source": [
        "#@title Setup\n",
        "#@markdown For best performance make sure you have a good internet connection.\n",
        "machine = !nvidia-smi -L\n",
        "print(machine)\n",
        "\n",
        "%cd /content\n",
        "!git clone --depth 1 https://github.com/eyaler/first-order-model\n",
        "!wget --no-check-certificate -nc https://openavatarify.s3.amazonaws.com/weights/vox-adv-cpk.pth.tar\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/vox-adv-cpk.pth.tar\n",
        "\n",
        "!mkdir -p /root/.cache/torch/hub/checkpoints\n",
        "%cd /root/.cache/torch/hub/checkpoints\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/s3fd-619a316812.pth\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/2DFAN4-11f355bf06.pth.tar\n",
        "%cd /content\n",
        "\n",
        "!pip install git+https://github.com/1adrianb/face-alignment@v1.0.1\n",
        "\n",
        "# !wget --no-check-certificate -nc https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz\n",
        "# !wget --no-check-certificate -nc https://eyalgruss.com/fomm/ngrok-v3-stable-linux-amd64.tgz\n",
        "# !tar xvzf ngrok-v3-stable-linux-amd64.tgz\n",
        "!wget --no-check-certificate -nc https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/cloudflared/releases/latest/download/cloudflared-linux-amd64\n",
        "!mv cloudflared-linux-amd64 cloudflared\n",
        "!chmod +x cloudflared\n",
        "\n",
        "!pip install bottle\n",
        "!pip install bottle_websocket\n",
        "!pip install wsaccel ujson\n",
        "!pip install gevent\n",
        "\n",
        "import warnings\n",
        "warnings.filterwarnings(\"ignore\")\n",
        "from IPython.display import display, Javascript\n",
        "from google.colab.output import eval_js\n",
        "\n",
        "def use_cam(url, quality=0.8):\n",
        "  print(\"start camera\")\n",
        "  js = Javascript('''\n",
        "    console.clear();\n",
        "    async function useCam(url, quality) {\n",
        "\n",
        "      const fps = document.createElement('div');\n",
        "      fps.style.marginTop = \"16px\";\n",
        "      document.body.appendChild(fps);\n",
        "      const panel = document.createElement('div');\n",
        "\n",
        "      function on_dragover (event)\n",
        "      {\n",
        "          event.preventDefault();\n",
        "          event.dataTransfer.dropEffect = 'copy';\n",
        "          document.body.style.backgroundColor = 'yellow';\n",
        "      }\n",
        "\n",
        "      function on_dragleave (event)\n",
        "      {\n",
        "          event.preventDefault();\n",
        "          document.body.style.backgroundColor = 'initial';\n",
        "      }\n",
        "\n",
        "      function on_drop (event)\n",
        "      {\n",
        "          event.preventDefault();\n",
        "          if (connection.readyState !== WebSocket.OPEN) {return}\n",
        "          document.body.style.backgroundColor = 'initial';\n",
        "          if (avatar!=last) {\n",
        "            if (last==\"1\") {av1_btn.click();}\n",
        "            else if (last==\"2\") {av2_btn.click();}\n",
        "            else {av3_btn.click();}\n",
        "          }\n",
        "          var imageUrl = event.dataTransfer.getData(\"text/html\")||event.dataTransfer.getData(\"url\");\n",
        "          var file = event.dataTransfer.files ? event.dataTransfer.files[0] : null;\n",
        "          if (file) {\n",
        "            console.log('retrieving image from file...');\n",
        "            let reader = new FileReader();\n",
        "            reader.onload = function (event)\n",
        "            {\n",
        "              connection.send('drag' + event.target.result);\n",
        "            };\n",
        "            reader.readAsDataURL(file);\n",
        "          } else if (imageUrl) {\n",
        "            console.log('retrieving image from URL: ' + imageUrl);\n",
        "            connection.send('url'+imageUrl);\n",
        "          }\n",
        "      }\n",
        "      document.body.addEventListener ('dragover',  on_dragover, false);\n",
        "      document.body.addEventListener ('dragleave', on_dragleave, false);\n",
        "      document.body.addEventListener ('drop' ,     on_drop, false);\n",
        "\n",
        "      const div = document.createElement('div');\n",
        "      const div1 = document.createElement('div');\n",
        "      const div2 = document.createElement('div');\n",
        "      div2.style.textAlign = 'right';\n",
        "      div.appendChild(div1);\n",
        "      div.appendChild(div2);\n",
        "      div.style.marginTop = \"16px\";\n",
        "      var display_size = 256;\n",
        "      panel.style.width = (display_size*2+16).toString()+\"px\";\n",
        "      div.style.display= \"flex\";\n",
        "      div.style.justifyContent= \"space-between\";\n",
        "      panel.appendChild(div);\n",
        "      document.body.appendChild(panel);\n",
        "      //video element\n",
        "      const video = document.createElement('video');\n",
        "      video.style.display = 'None';\n",
        "      const stream = await navigator.mediaDevices.getUserMedia({audio: false, video: { width:{min:256} , height: {min:256} , frameRate:24}});\n",
        "      div.appendChild(video);\n",
        "      video.srcObject = stream;\n",
        "      await video.play();\n",
        "\n",
        "      //canvas for display. frame rate depends on display size and jpeg quality.\n",
        "      const src_canvas = document.createElement('canvas');\n",
        "      src_canvas.height  = display_size;\n",
        "      src_canvas.width = display_size; // * video.videoWidth / video.videoHeight;\n",
        "      const src_canvasCtx = src_canvas.getContext('2d');\n",
        "\n",
        "      src_canvasCtx.translate(src_canvas.width, 0);\n",
        "      src_canvasCtx.scale(-1, 1);\n",
        "      div1.appendChild(src_canvas);\n",
        "\n",
        "      const dst_canvas = document.createElement('canvas');\n",
        "      dst_canvas.width  = src_canvas.width;\n",
        "      dst_canvas.height = src_canvas.height;\n",
        "      const dst_canvasCtx = dst_canvas.getContext('2d');\n",
        "      div2.appendChild(dst_canvas);\n",
        "\n",
        "      const vsld1 = document.createElement('input');\n",
        "      const vsld2 = document.createElement('input');\n",
        "      vsld1.style.marginTop = \"16px\";\n",
        "      vsld2.style.marginTop = \"16px\";\n",
        "      vsld1.type = \"range\";\n",
        "      vsld1.min = \"0\";\n",
        "      vsld1.max = \"0.6\";\n",
        "      vsld1.step = \"0.01\";\n",
        "      vsld1.defaultValue = \"0.2\";\n",
        "      vsld1.style.width = \"95%\";\n",
        "      vsld2.style.width = \"95%\";\n",
        "      vsld2.type = \"range\";\n",
        "      vsld2.min = \"0\";\n",
        "      vsld2.max = \"0.6\";\n",
        "      vsld2.step = \"0.01\";\n",
        "      vsld2.defaultValue = \"0\";\n",
        "      div1.appendChild(vsld1);\n",
        "      div2.appendChild(vsld2);\n",
        "\n",
        "      //exit button\n",
        "      const btn_div = document.createElement('div');\n",
        "      //document.body.appendChild(btn_div);\n",
        "      const exit_btn = document.createElement('button');\n",
        "      exit_btn.innerHTML = '<u>E</u>xit';\n",
        "      var exit_flg = true;\n",
        "      //exit_btn.onclick = function() {exit_flg = false;};\n",
        "      //btn_div.appendChild(exit_btn);\n",
        "\n",
        "      const btn3_div = document.createElement('div');\n",
        "      btn3_div.style.marginTop = \"16px\";\n",
        "      btn3_div.style.display= \"flex\";\n",
        "      btn3_div.style.justifyContent= \"space-between\";\n",
        "      panel.appendChild(btn3_div);\n",
        "\n",
        "      const btn1_div = document.createElement('div');\n",
        "      btn1_div.style.marginTop = \"16px\";\n",
        "      btn1_div.style.display= \"flex\";\n",
        "      btn1_div.style.justifyContent= \"space-between\";\n",
        "      panel.appendChild(btn1_div);\n",
        "\n",
        "      const btn2_div = document.createElement('div');\n",
        "      btn2_div.style.marginTop = \"16px\";\n",
        "      btn2_div.style.display= \"flex\";\n",
        "      btn2_div.style.justifyContent= \"space-between\";\n",
        "      //panel.appendChild(btn2_div);\n",
        "\n",
        "      const btn2b_div = document.createElement('div');\n",
        "      btn2b_div.style.marginTop = \"16px\";\n",
        "      btn2b_div.style.display= \"flex\";\n",
        "      btn2b_div.style.justifyContent= \"space-between\";\n",
        "      panel.appendChild(btn2b_div);\n",
        "\n",
        "      const btn4_div = document.createElement('div');\n",
        "      btn4_div.style.marginTop = \"16px\";\n",
        "      btn4_div.style.display= \"flex\";\n",
        "      btn4_div.style.justifyContent= \"space-between\";\n",
        "      panel.appendChild(btn4_div);\n",
        "\n",
        "      function toggle(btn) {\n",
        "          av1_btn.style.fontWeight='normal';\n",
        "          av2_btn.style.fontWeight='normal';\n",
        "          av3_btn.style.fontWeight='normal';\n",
        "          av4_btn.style.fontWeight='normal';\n",
        "          av5_btn.style.fontWeight='normal';\n",
        "          av6_btn.style.fontWeight='normal';\n",
        "          av7_btn.style.fontWeight='normal';\n",
        "          av8_btn.style.fontWeight='normal';\n",
        "          av9_btn.style.fontWeight='normal';\n",
        "          av10_btn.style.fontWeight='normal';\n",
        "          av11_btn.style.fontWeight='normal';\n",
        "          av12_btn.style.fontWeight='normal';\n",
        "          btn.style.fontWeight='bold';\n",
        "      }\n",
        "\n",
        "      var avatar = \"1\";\n",
        "      var last = avatar;\n",
        "      //avatar1 button\n",
        "      const av1_btn = document.createElement('button');\n",
        "      av1_btn.innerHTML = 'Avatar <u>1</u>';\n",
        "      av1_btn.onclick = function() {avatar = \"1\";last=avatar;toggle(this);};\n",
        "      av1_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"1\";last=avatar;toggle(this);}};\n",
        "      av1_btn.style.width = \"22.5%\";\n",
        "      btn1_div.appendChild(av1_btn);\n",
        "\n",
        "      //avatar2 button\n",
        "      const av2_btn = document.createElement('button');\n",
        "      av2_btn.innerHTML = 'Avatar <u>2</u>';\n",
        "      av2_btn.onclick = function() {avatar = \"2\";last=avatar;toggle(this);};\n",
        "      av2_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"2\";last=avatar;toggle(this);}};\n",
        "      av2_btn.style.width = \"22.5%\";\n",
        "      btn1_div.appendChild(av2_btn);\n",
        "\n",
        "      //avatar3 button\n",
        "      const av3_btn = document.createElement('button');\n",
        "      av3_btn.innerHTML = 'Avatar <u>3</u>';\n",
        "      av3_btn.onclick = function() {avatar = \"3\";last=avatar;toggle(this);};\n",
        "      av3_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"3\";last=avatar;toggle(this);}};\n",
        "      av3_btn.style.width = \"22.5%\";\n",
        "      btn1_div.appendChild(av3_btn);\n",
        "\n",
        "      //random human button\n",
        "      const av4_btn = document.createElement('button');\n",
        "      av4_btn.innerHTML = 'Human (<u>4</u>)';\n",
        "      av4_btn.onclick = function() {avatar = \"4\";toggle(this);};\n",
        "      av4_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"4\";toggle(this);}};\n",
        "      av4_btn.style.width = \"22.5%\";\n",
        "      btn1_div.appendChild(av4_btn);\n",
        "\n",
        "      //random man button\n",
        "      const av5_btn = document.createElement('button');\n",
        "      av5_btn.innerHTML = 'Man (<u>5</u>)';\n",
        "      av5_btn.onclick = function() {avatar = \"5\";toggle(this);};\n",
        "      av5_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"5\";toggle(this);}};\n",
        "      av5_btn.style.width = \"22.5%\";\n",
        "      btn2_div.appendChild(av5_btn);\n",
        "\n",
        "      //random woman button\n",
        "      const av6_btn = document.createElement('button');\n",
        "      av6_btn.innerHTML = 'Woman (<u>6</u>)';\n",
        "      av6_btn.onclick = function() {avatar = \"6\";toggle(this);};\n",
        "      av6_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"6\";toggle(this);}};\n",
        "      av6_btn.style.width = \"22.5%\";\n",
        "      btn2_div.appendChild(av6_btn);\n",
        "\n",
        "      //random boy button\n",
        "      const av7_btn = document.createElement('button');\n",
        "      av7_btn.innerHTML = 'Boy (<u>7</u>)';\n",
        "      av7_btn.onclick = function() {avatar = \"7\";toggle(this);};\n",
        "      av7_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"7\";toggle(this);}};\n",
        "      av7_btn.style.width = \"22.5%\";\n",
        "      btn2_div.appendChild(av7_btn);\n",
        "\n",
        "      //random girl button\n",
        "      const av8_btn = document.createElement('button');\n",
        "      av8_btn.innerHTML = 'Girl (<u>8</u>)';\n",
        "      av8_btn.onclick = function() {avatar = \"8\";toggle(this);};\n",
        "      av8_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"8\";toggle(this);}};\n",
        "      av8_btn.style.width = \"22.5%\";\n",
        "      btn2_div.appendChild(av8_btn);\n",
        "\n",
        "      //random waifu button\n",
        "      const av9_btn = document.createElement('button');\n",
        "      av9_btn.innerHTML = 'Waifu (<u>9</u>)';\n",
        "      av9_btn.onclick = function() {avatar = \"9\";toggle(this);};\n",
        "      av9_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"9\";toggle(this);}};\n",
        "      av9_btn.style.width = \"22.5%\";\n",
        "      btn2b_div.appendChild(av9_btn);\n",
        "\n",
        "      //random fursona button\n",
        "      const av10_btn = document.createElement('button');\n",
        "      av10_btn.innerHTML = 'Fursona (<u>0</u>)';\n",
        "      av10_btn.onclick = function() {avatar = \"0\";toggle(this);};\n",
        "      av10_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"0\";toggle(this);}};\n",
        "      av10_btn.style.width = \"22.5%\";\n",
        "      btn2b_div.appendChild(av10_btn);\n",
        "\n",
        "      //random muppet button\n",
        "      const av11_btn = document.createElement('button');\n",
        "      av11_btn.innerHTML = 'Muppet (<u>-</u>)';\n",
        "      av11_btn.onclick = function() {avatar = \"-\";toggle(this);};\n",
        "      av11_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"-\";toggle(this);}};\n",
        "      av11_btn.style.width = \"22.5%\";\n",
        "      btn2b_div.appendChild(av11_btn);\n",
        "\n",
        "      //you button\n",
        "      const av12_btn = document.createElement('button');\n",
        "      av12_btn.innerHTML = 'You (<u>=</u>)';\n",
        "      av12_btn.onclick = function() {avatar = \"=\";toggle(this);};\n",
        "      av12_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {avatar = \"=\";toggle(this);}};\n",
        "      av12_btn.style.width = \"22.5%\";\n",
        "      btn2b_div.appendChild(av12_btn);\n",
        "\n",
        "\n",
        "      toggle(av1_btn);\n",
        "\n",
        "      function reset() {\n",
        "          vsld1.value = vsld1.defaultValue;\n",
        "          vsld2.value = vsld2.defaultValue;\n",
        "          sld.value = sld.defaultValue;\n",
        "          alp.value = alp.defaultValue;\n",
        "          msg.value = msg.defaultValue;\n",
        "          auto_btn.checked = auto_btn.defaultChecked;\n",
        "          kp_btn.checked = kp_btn.defaultChecked;\n",
        "          adam_btn.checked = adam_btn.defaultChecked;\n",
        "          relm_btn.checked = relm_btn.defaultChecked;\n",
        "          relj_btn.checked = relj_btn.defaultChecked;\n",
        "          sld_out.innerHTML = parseFloat(sld.value).toFixed(1);\n",
        "          alp_out.innerHTML = parseFloat(alp.value).toFixed(1);\n",
        "          msg_out.innerHTML = msg.value;\n",
        "          real_frame_count = 0;\n",
        "          if (start!=null) {start=performance.now();}\n",
        "          calib_btn.click();\n",
        "      }\n",
        "\n",
        "      document.addEventListener('keydown', function (event) {\n",
        "        if ( event.key == '1' ) { av1_btn.click();  }\n",
        "        else if ( event.key == '2' ) { av2_btn.click();  }\n",
        "        else if ( event.key == '3' ) { av3_btn.click();  }\n",
        "        else if ( event.key == '4' ) { av4_btn.click();  }\n",
        "        else if ( event.key == '5' ) { av5_btn.click();  }\n",
        "        else if ( event.key == '6' ) { av6_btn.click();  }\n",
        "        else if ( event.key == '7' ) { av7_btn.click();  }\n",
        "        else if ( event.key == '8' ) { av8_btn.click();  }\n",
        "        else if ( event.key == '9' ) { av9_btn.click();  }\n",
        "        else if ( event.key == '0' ) { av10_btn.click();  }\n",
        "        else if ( event.key == '-' ) { av11_btn.click();  }\n",
        "        else if ( event.key == '=' ) { av12_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'c' || event.key == 'ב' || event.key == '`' || event.key == ';') { calib_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'r' || event.key == 'ר' || event.key=='Escape' || event.key=='Backspace') {reset();}\n",
        "        else if ( event.key.toLowerCase() == 's' || event.key == 'ד') { adam_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'm' || event.key == 'צ') { relm_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'j' || event.key == 'ח') { relj_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'l' || event.key == 'ך') { kp_btn.click();  }\n",
        "        else if ( event.key.toLowerCase() == 'b' || event.key == 'נ') { alp.value=(parseFloat(alp.value)==0)?\"0.5\":\"0\"; alp_out.innerHTML = \"Alpha blend:&nbsp;&nbsp;\"+parseFloat(alp.value).toFixed(1);}\n",
        "        else if ( event.key.toLowerCase() == 'a' || event.key == 'ש') { auto_btn.click();}\n",
        "      });\n",
        "\n",
        "      //calib button\n",
        "      const calib_btn = document.createElement('button');\n",
        "      calib_btn.innerHTML = '<u>C</u>alibrate (<u>`</u>)';\n",
        "      var calib_flg = \"1\";\n",
        "      calib_btn.style.width = \"48.33%\";\n",
        "      calib_btn.onclick = function() {calib_flg = \"1\";};\n",
        "      calib_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {calib_flg = \"1\";}};\n",
        "      btn3_div.appendChild(calib_btn);\n",
        "      calib_btn.focus();\n",
        "\n",
        "      //auto button\n",
        "      const auto_label = document.createElement('label');\n",
        "      btn3_div.appendChild(auto_label);\n",
        "      const auto_btn = document.createElement('input');\n",
        "      auto_btn.type = \"checkbox\";\n",
        "      auto_btn.defaultChecked = false;\n",
        "      auto_label.style.width = \"22.5%\";\n",
        "      auto_label.innerHTML = '<u>A</u>uto<br>calibrate';\n",
        "      auto_label.style.textAlign = 'center';\n",
        "      auto_btn.style.marginRight = '10px';\n",
        "      auto_label.insertBefore(auto_btn, auto_label.firstChild);\n",
        "\n",
        "      //reset button\n",
        "      const reset_btn = document.createElement('button');\n",
        "      reset_btn.innerHTML = '<u>R</u>eset (<u>ESC</u>/<u>BS</u>)';\n",
        "      reset_btn.onclick = function() {reset();};\n",
        "      reset_btn.onkeydown = function(e) {if (e.code=='Enter'||e.code=='Space') {reset();}};\n",
        "      reset_btn.style.width = \"22.5%\";\n",
        "      btn3_div.appendChild(reset_btn);\n",
        "\n",
        "      //adam button\n",
        "      const adam_label = document.createElement('label');\n",
        "      btn4_div.appendChild(adam_label);\n",
        "      const adam_btn = document.createElement('input');\n",
        "      adam_btn.type = \"checkbox\";\n",
        "      adam_btn.defaultChecked = true;\n",
        "      adam_label.style.width = \"22.5%\";\n",
        "      adam_label.innerHTML = 'Adaptive<br><u>s</u>cale';\n",
        "      adam_label.style.textAlign = 'center';\n",
        "      adam_btn.style.marginRight = '10px';\n",
        "      adam_label.insertBefore(adam_btn, adam_label.firstChild);\n",
        "\n",
        "      //relm button\n",
        "      const relm_label = document.createElement('label');\n",
        "      btn4_div.appendChild(relm_label);\n",
        "      const relm_btn = document.createElement('input');\n",
        "      relm_btn.type = \"checkbox\";\n",
        "      relm_btn.defaultChecked = true;\n",
        "      relm_label.style.width = \"22.5%\";\n",
        "      relm_label.innerHTML = 'Relative<br><u>m</u>ovement';\n",
        "      relm_label.style.textAlign = 'center';\n",
        "      relm_btn.style.marginRight = '10px';\n",
        "      relm_label.insertBefore(relm_btn, relm_label.firstChild);\n",
        "\n",
        "      //relj button\n",
        "      const relj_label = document.createElement('label');\n",
        "      btn4_div.appendChild(relj_label);\n",
        "      const relj_btn = document.createElement('input');\n",
        "      relj_btn.type = \"checkbox\";\n",
        "      relj_btn.defaultChecked = true;\n",
        "      relj_label.style.width = \"22.5%\";\n",
        "      relj_label.innerHTML = 'Relative<br><u>J</u>acobian';\n",
        "      relj_label.style.textAlign = 'center';\n",
        "      relj_btn.style.marginRight = '10px';\n",
        "      relj_label.insertBefore(relj_btn, relj_label.firstChild);\n",
        "\n",
        "      //kp button\n",
        "      const kp_label = document.createElement('label');\n",
        "      btn4_div.appendChild(kp_label);\n",
        "      const kp_btn = document.createElement('input');\n",
        "      kp_btn.type = \"checkbox\";\n",
        "      kp_btn.defaultChecked = false;\n",
        "      kp_label.style.width = \"22.5%\";\n",
        "      kp_label.innerHTML = 'Show<br><u>l</u>andmarks';\n",
        "      kp_label.style.textAlign = 'center';\n",
        "      kp_btn.style.marginRight = '10px';\n",
        "      kp_label.insertBefore(kp_btn, kp_label.firstChild);\n",
        "\n",
        "\n",
        "      //slider\n",
        "      const btm_div = document.createElement('div');\n",
        "      btm_div.style.display= \"flex\";\n",
        "      btm_div.style.justifyContent= \"space-between\";\n",
        "      const btm0_div = document.createElement('div');\n",
        "      const btm1_div = document.createElement('div');\n",
        "      const btm2_div = document.createElement('div');\n",
        "      btm0_div.style.display= \"flex\";\n",
        "      btm0_div.style.flexDirection = \"column\";\n",
        "      btm0_div.style.justifyContent= \"space-around\";\n",
        "      btm1_div.style.display= \"flex\";\n",
        "      btm1_div.style.flexDirection = \"column\";\n",
        "      btm1_div.style.justifyContent= \"space-around\";\n",
        "      btm2_div.style.display= \"flex\";\n",
        "      btm2_div.style.width= \"69%\";\n",
        "      btm2_div.style.textAlign= \"right\";\n",
        "      btm2_div.style.flexDirection = \"column\";\n",
        "      btm2_div.style.justifyContent= \"space-around\";\n",
        "      panel.appendChild(btm_div);\n",
        "      btm_div.appendChild(btm0_div);\n",
        "      btm_div.appendChild(btm1_div);\n",
        "      btm_div.appendChild(btm2_div);\n",
        "\n",
        "      const sld = document.createElement('input');\n",
        "      const sld_out = document.createElement('div');\n",
        "      const sld_text = document.createElement('div');\n",
        "      sld.type = \"range\";\n",
        "      sld.min = \"0.1\";\n",
        "      sld.max = \"5.0\";\n",
        "      sld.step = \"0.1\";\n",
        "      btm_div.style.marginTop = \"16px\";\n",
        "      sld.defaultValue = \"1.0\";\n",
        "      sld_text.innerHTML = \"Exaggeration&nbsp;factor:\";\n",
        "      sld_out.innerHTML = parseFloat(sld.value).toFixed(1);\n",
        "      sld.oninput = function(event) {sld_out.innerHTML = parseFloat(this.value).toFixed(1);};\n",
        "      btm0_div.appendChild(sld_text);\n",
        "      btm1_div.appendChild(sld_out);\n",
        "      btm2_div.appendChild(sld);\n",
        "\n",
        "      //alpha\n",
        "      const alp = document.createElement('input');\n",
        "      const alp_out = document.createElement('div');\n",
        "      const alp_text = document.createElement('div');\n",
        "      alp.type = \"range\";\n",
        "      alp.min = \"0\";\n",
        "      alp.max = \"1\";\n",
        "      alp.step = \"0.1\";\n",
        "      alp.defaultValue = \"0\";\n",
        "      alp.style.marginTop = \"16px\";\n",
        "      alp_out.style.marginTop = \"16px\";\n",
        "      alp_text.style.marginTop = \"16px\";\n",
        "      alp_text.innerHTML = \"Alpha&nbsp;<u>b</u>lend:\";\n",
        "      alp_out.innerHTML = parseFloat(alp.value).toFixed(1);\n",
        "      alp.oninput = function(event) {alp_out.innerHTML = parseFloat(this.value).toFixed(1);};\n",
        "      btm0_div.appendChild(alp_text);\n",
        "      btm1_div.appendChild(alp_out);\n",
        "      btm2_div.appendChild(alp);\n",
        "\n",
        "      //msg\n",
        "      var real_frame_count = 0;\n",
        "      var start = null;\n",
        "      const msg = document.createElement('input');\n",
        "      const msg_out = document.createElement('div');\n",
        "      const msg_text = document.createElement('div');\n",
        "      msg.type = \"range\";\n",
        "      msg.min = \"1\";\n",
        "      msg.max = \"20\";\n",
        "      msg.step = \"1\";\n",
        "      msg.defaultValue = \"6\";\n",
        "      msg.style.marginTop = \"16px\";\n",
        "      msg_out.style.marginTop = \"16px\";\n",
        "      msg_text.style.marginTop = \"16px\";\n",
        "      msg_text.innerHTML = \"Message&nbsp;buffer:\";\n",
        "      msg_out.innerHTML = msg.value;\n",
        "      msg.oninput = function(event) {msg_out.innerHTML = msg.value; real_frame_count = 0; start = null;};\n",
        "      btm0_div.appendChild(msg_text);\n",
        "      btm1_div.appendChild(msg_out);\n",
        "      btm2_div.appendChild(msg);\n",
        "\n",
        "      //log\n",
        "      let jsLog = function(abc) {\n",
        "        document.querySelector(\"#output-area\").appendChild(document.createTextNode(`${abc} `));\n",
        "      };\n",
        "      // Resize the output to fit the video element.\n",
        "      google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);\n",
        "\n",
        "      //for websocket connection.\n",
        "      var connection = 0;\n",
        "      var in_transit_count = 0;\n",
        "      var payload_size = 0;\n",
        "\n",
        "      var socketOnOpen = function(e) {\n",
        "        console.log(\"websocket open\");\n",
        "        jsLog(\" Websocket open. \");\n",
        "        start=performance.now();\n",
        "      }\n",
        "\n",
        "      var socketOnMessage = function(e) {\n",
        "        in_transit_count-=1;\n",
        "        var image = new Image();\n",
        "        image.src = e.data;\n",
        "        //image.onload = function() {dst_canvasCtx.drawImage(image,parseInt(vsld2.value), parseInt(vsld2.value), display_size-2*parseInt(vsld2.value), display_size-2*parseInt(vsld2.value),0,0, display_size, display_size);};\n",
        "        image.onload = function() {dst_canvasCtx.drawImage(image,0,0); real_frame_count+=1;};\n",
        "        if (start) {fps.innerHTML = \"payload=\" + payload_size + \" fps=\"+(real_frame_count*1000/(performance.now()-start)).toFixed(1)+\" --- Drag & drop local/web images to upload new avatars!\";}\n",
        "      };\n",
        "\n",
        "      var socketOnClose = function(e) {\n",
        "        console.log('websocket disconnected - waiting for connection');\n",
        "        websocketWaiter();\n",
        "      };\n",
        "\n",
        "      function websocketWaiter() {\n",
        "        setTimeout(function() {\n",
        "          connection = new WebSocket(url);\n",
        "          connection.onopen = socketOnOpen;\n",
        "          connection.onmessage = socketOnMessage;\n",
        "          connection.onclose = socketOnClose;\n",
        "        }, 1000);\n",
        "      };\n",
        "\n",
        "      websocketWaiter();\n",
        "      jsLog(\"camera=\"+video.videoWidth+\"x\"+video.videoHeight+\".\");\n",
        "\n",
        "      // loop\n",
        "      async function _canvasUpdate() {\n",
        "        var s = Math.min(video.videoWidth, video.videoHeight) * (1-vsld1.value); // adapted from https://github.com/alievk/avatarify\n",
        "        src_canvasCtx.drawImage(video, Math.round((video.videoWidth-s)/2), Math.round((video.videoHeight-s)/2), Math.round(s), Math.round(s), 0, 0, display_size, display_size);\n",
        "\n",
        "        if (connection.readyState === WebSocket.OPEN && in_transit_count<parseInt(msg.value))\n",
        "        {\n",
        "          in_transit_count+=1;\n",
        "          var img = src_canvas.toDataURL('image/jpeg', quality);\n",
        "          var sld_str = parseFloat(sld.value).toFixed(1);\n",
        "          var alpha = parseFloat(alp.value).toFixed(1);\n",
        "          var crop = parseFloat(vsld2.value).toFixed(2);\n",
        "          var auto_flg = (auto_btn.checked)?\"1\":\"0\";\n",
        "          var adam_flg = (adam_btn.checked)?\"1\":\"0\";\n",
        "          var relm_flg = (relm_btn.checked)?\"1\":\"0\";\n",
        "          var relj_flg = (relj_btn.checked)?\"1\":\"0\";\n",
        "          var kp_flg = (kp_btn.checked)?\"1\":\"0\";\n",
        "          var payload = calib_flg+avatar+sld_str+alpha+crop+auto_flg+adam_flg+relm_flg+relj_flg+kp_flg+img;\n",
        "          payload_size = payload.length;\n",
        "          connection.send(payload);\n",
        "          avatar=\"`\";\n",
        "          calib_flg = \"0\";\n",
        "        }\n",
        "        if (exit_flg) {\n",
        "            requestAnimationFrame(_canvasUpdate);\n",
        "        }\n",
        "        else {\n",
        "          stream.getVideoTracks()[0].stop();\n",
        "          connection.close();\n",
        "        }\n",
        "      }\n",
        "      _canvasUpdate();\n",
        "    }\n",
        "    ''')\n",
        "  display(js)\n",
        "  eval_js('useCam(\"{}\", {})'.format(url, quality))\n",
        "\n",
        "print(machine)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PgKavCGCeDJh",
        "cellView": "form"
      },
      "source": [
        "#@title Get the Avatar images from the web\n",
        "#@markdown 1. You can change the URLs to your **own** stuff!\n",
        "#@markdown 2. Alternatively, you can upload **local** files in the next cell\n",
        "#@markdown 3. You can later also **drag and drop** images on the GUI to upload new avatars!\n",
        "\n",
        "image1_url = 'https://www.beat.com.au/wp-content/uploads/2018/05/ilana.jpg' #@param {type:\"string\"}\n",
        "image2_url = 'https://img.zeit.de/zeit-magazin/2017-03/marina-abramovic-performance-kuenstlerin-the-cleaner-monografie-oevre-bilder/marina-abramovic-performance-kuenstlerin-the-cleaner-monografie-oevre-10.jpg/imagegroup/original__620x620__desktop' #@param {type:\"string\"}\n",
        "image3_url = 'https://i.pinimg.com/originals/27/86/58/2786580674b7c9b20ead54f53bf0be9e.jpg' #@param {type:\"string\"}\n",
        "\n",
        "if image1_url:\n",
        "  !wget \"$image1_url\" -O /content/image1\n",
        "\n",
        "if image2_url:\n",
        "  !wget \"$image2_url\" -O /content/image2\n",
        "\n",
        "if image3_url:\n",
        "  !wget \"$image3_url\" -O /content/image3"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tKKtxUcyrhSZ",
        "cellView": "form"
      },
      "source": [
        "#@title Optionally upload local Avatar images { run: \"auto\" }\n",
        "#@markdown Instructions: tick the checkbox, press play if the cell does not start by itself, then click the upload button that appears below\n",
        "\n",
        "manually_upload_images = False #@param {type:\"boolean\"}\n",
        "if manually_upload_images:\n",
        "  from google.colab import files\n",
        "  import shutil\n",
        "\n",
        "  %cd /content/sample_data\n",
        "  try:\n",
        "    uploaded = files.upload()\n",
        "  except Exception as e:\n",
        "    %cd /content\n",
        "    raise e\n",
        "\n",
        "  for i,fn in enumerate(uploaded, start=1):\n",
        "    shutil.move('/content/sample_data/'+fn, '/content/image%d'%i)\n",
        "    if i==3:\n",
        "      break\n",
        "  %cd /content\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DsbBpNw5nu0l",
        "cellView": "form"
      },
      "source": [
        "#@title Prepare assets\n",
        "center_image1_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image1_to_head = False #@param {type:\"boolean\"}\n",
        "image1_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image2_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image2_to_head = True #@param {type:\"boolean\"}\n",
        "image2_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image3_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image3_to_head = False #@param {type:\"boolean\"}\n",
        "image3_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image_to_head = (center_image1_to_head, center_image2_to_head, center_image3_to_head)\n",
        "crop_image_to_head = (crop_image1_to_head, crop_image2_to_head, crop_image3_to_head)\n",
        "image_crop_expansion_factor = (image1_crop_expansion_factor, image2_crop_expansion_factor, image3_crop_expansion_factor)\n",
        "\n",
        "import imageio\n",
        "import numpy as np\n",
        "from google.colab.patches import cv2_imshow\n",
        "from skimage.transform import resize\n",
        "\n",
        "import face_alignment\n",
        "import torch\n",
        "\n",
        "if not hasattr(face_alignment.utils, '_original_transform'):\n",
        "    face_alignment.utils._original_transform = face_alignment.utils.transform\n",
        "\n",
        "def patched_transform(point, center, scale, resolution, invert=False):\n",
        "    return face_alignment.utils._original_transform(\n",
        "        point, center, torch.tensor(scale, dtype=torch.float32), torch.tensor(resolution, dtype=torch.float32), invert)\n",
        "\n",
        "face_alignment.utils.transform = patched_transform\n",
        "\n",
        "try:\n",
        "  fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True,\n",
        "                                      device='cuda')\n",
        "except Exception:\n",
        "  !rm -rf /root/.cache/torch/hub/checkpoints/s3fd-619a316812.pth\n",
        "  !rm -rf /root/.cache/torch/hub/checkpoints/2DFAN4-11f355bf06.pth.tar\n",
        "  fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True,\n",
        "                                      device='cuda')\n",
        "\n",
        "def create_bounding_box(target_landmarks, expansion_factor=1):\n",
        "    target_landmarks = np.array(target_landmarks)\n",
        "    x_y_min = target_landmarks.reshape(-1, 68, 2).min(axis=1)\n",
        "    x_y_max = target_landmarks.reshape(-1, 68, 2).max(axis=1)\n",
        "    expansion_factor = (expansion_factor-1)/2\n",
        "    bb_expansion_x = (x_y_max[:, 0] - x_y_min[:, 0]) * expansion_factor\n",
        "    bb_expansion_y = (x_y_max[:, 1] - x_y_min[:, 1]) * expansion_factor\n",
        "    x_y_min[:, 0] -= bb_expansion_x\n",
        "    x_y_max[:, 0] += bb_expansion_x\n",
        "    x_y_min[:, 1] -= bb_expansion_y\n",
        "    x_y_max[:, 1] += bb_expansion_y\n",
        "    return np.hstack((x_y_min, x_y_max-x_y_min))\n",
        "\n",
        "def fix_dims(im):\n",
        "    if im.ndim == 2:\n",
        "        im = np.tile(im[..., None], [1, 1, 3])\n",
        "    return im[...,:3]\n",
        "\n",
        "def get_crop(im, center_face=True, crop_face=True, expansion_factor=1, landmarks=None):\n",
        "    im = fix_dims(im)\n",
        "    if (center_face or crop_face) and not landmarks:\n",
        "        landmarks = fa.get_landmarks_from_image(im)\n",
        "    if (center_face or crop_face) and landmarks:\n",
        "        rects = create_bounding_box(landmarks, expansion_factor=expansion_factor)\n",
        "        x0,y0,w,h = sorted(rects, key=lambda x: x[2]*x[3])[-1]\n",
        "        if crop_face:\n",
        "            s = max(h, w)\n",
        "            x0 += (w-s)//2\n",
        "            x1 = x0 + s\n",
        "            y0 += (h-s)//2\n",
        "            y1 = y0 + s\n",
        "        else:\n",
        "            img_h,img_w = im.shape[:2]\n",
        "            img_s = min(img_h,img_w)\n",
        "            x0 = min(max(0, x0+(w-img_s)//2), img_w-img_s)\n",
        "            x1 = x0 + img_s\n",
        "            y0 = min(max(0, y0+(h-img_s)//2), img_h-img_s)\n",
        "            y1 = y0 + img_s\n",
        "    else:\n",
        "        h,w = im.shape[:2]\n",
        "        s = min(h,w)\n",
        "        x0 = (w-s)//2\n",
        "        x1 = x0 + s\n",
        "        y0 = (h-s)//2\n",
        "        y1 = y0 + s\n",
        "    return int(x0),int(x1),int(y0),int(y1)\n",
        "\n",
        "def pad_crop_resize(im, x0=None, x1=None, y0=None, y1=None, new_h=256, new_w=256):\n",
        "    im = fix_dims(im)\n",
        "    h,w = im.shape[:2]\n",
        "    if x0 is None:\n",
        "      x0 = 0\n",
        "    if x1 is None:\n",
        "      x1 = w\n",
        "    if y0 is None:\n",
        "      y0 = 0\n",
        "    if y1 is None:\n",
        "      y1 = h\n",
        "    if x0<0 or x1>w or y0<0 or y1>h:\n",
        "        im = np.pad(im, pad_width=[(max(-y0,0),max(y1-h,0)),(max(-x0,0),max(x1-w,0)),(0,0)], mode='edge')\n",
        "    im = im[max(y0,0):y1-min(y0,0),max(x0,0):x1-min(x0,0)]\n",
        "    im = resize(im, (im.shape[0] if new_h is None else new_h, im.shape[1] if new_w is None else new_w))\n",
        "    return im\n",
        "\n",
        "source_image = []\n",
        "orig_image = []\n",
        "for i in range(3):\n",
        "    img = imageio.imread('/content/image%d'%(i+1))\n",
        "    img = pad_crop_resize(img, *get_crop(img, center_face=center_image_to_head[i], crop_face=crop_image_to_head[i], expansion_factor=image_crop_expansion_factor[i]), new_h=None, new_w=None)\n",
        "    orig_image.append(img)\n",
        "    source_image.append(resize(img, (256,256)))\n",
        "num_avatars = len(source_image)\n",
        "\n",
        "cv2_imshow(np.hstack(source_image)[...,::-1]*255)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "g8qFmqu1J7-j",
        "cellView": "form"
      },
      "source": [
        "#@title Go live!\n",
        "#@markdown Kindly approve camera access if asked. If it seems stuck for a long time, click stop and then run this cell again.\n",
        "tunnel = 'argo' #@param ['argo']\n",
        "# removed ngrok as it now requires authentication\n",
        "\n",
        "import requests\n",
        "import re\n",
        "\n",
        "!pkill -f ngrok\n",
        "!pkill -f cloudflared\n",
        "try:\n",
        "  _pool.terminate()\n",
        "except Exception:\n",
        "  pass\n",
        "try:\n",
        "  save_socket.close()\n",
        "except Exception:\n",
        "  pass\n",
        "try:\n",
        "  server.shutdown()\n",
        "except Exception:\n",
        "  pass\n",
        "\n",
        "port = 6006\n",
        "if tunnel=='ngrok':\n",
        "  !nohup /content/ngrok http --inspect=false $port &\n",
        "elif tunnel=='argo':\n",
        "  !nohup /content/cloudflared tunnel --url http://localhost:$port --metrics localhost:49589 &\n",
        "\n",
        "from time import time, sleep\n",
        "import json\n",
        "ngrok_url = None\n",
        "while not ngrok_url:\n",
        "  try:\n",
        "    if tunnel=='ngrok':\n",
        "      ngrok_json = !curl http://localhost:4040/api/tunnels\n",
        "      ngrok_url = json.loads(ngrok_json[0])['tunnels'][0]['public_url'].split('://',1)[-1]\n",
        "    elif tunnel=='argo':\n",
        "      argo_metrics = requests.get(\"http://localhost:49589/metrics\").text\n",
        "      ngrok_url = re.search('cloudflared_tunnel_user_hostnames_counts{userHostname=\"https://(.+?)\"}', argo_metrics).group(1)\n",
        "  except Exception as e:\n",
        "    print('Trying to connect tunnel...', e)\n",
        "    sleep(1)\n",
        "from IPython.display import clear_output\n",
        "clear_output()\n",
        "ngrok_url = 'wss://'+ngrok_url\n",
        "print(ngrok_url)\n",
        "\n",
        "%cd /content/first-order-model\n",
        "\n",
        "from demo import load_checkpoints\n",
        "generator, kp_detector = load_checkpoints(config_path='/content/first-order-model/config/vox-adv-256.yaml',\n",
        "                            checkpoint_path='/content/vox-adv-cpk.pth.tar')\n",
        "\n",
        "\n",
        "from scipy.spatial import ConvexHull\n",
        "def normalize_kp(kp):\n",
        "    kp = kp - kp.mean(axis=0, keepdims=True)\n",
        "    area = ConvexHull(kp[:, :2]).volume  # for 2D points, ConvexHull.volume is the enclosed area\n",
        "    area = np.sqrt(area)\n",
        "    kp[:, :2] = kp[:, :2] / area\n",
        "    return kp\n",
        "\n",
        "import torch\n",
        "from skimage import img_as_ubyte\n",
        "import cv2\n",
        "import bottle\n",
        "import gevent\n",
        "from bottle.ext.websocket import GeventWebSocketServer\n",
        "from bottle.ext.websocket import websocket\n",
        "from multiprocessing import Pool\n",
        "from PIL import Image\n",
        "import contextlib\n",
        "from io import BytesIO, StringIO\n",
        "import base64\n",
        "from logger import Visualizer\n",
        "vis = Visualizer(kp_size=3, colormap='gist_rainbow')\n",
        "\n",
        "def norm_source(i,crop=0):\n",
        "    with torch.no_grad():\n",
        "        img = source_image[i]\n",
        "        if crop:\n",
        "            img = orig_image[i]\n",
        "            h,w = img.shape[:2]\n",
        "            s = min(h,w) * (1-crop) # adapted from https://github.com/alievk/avatarify\n",
        "            img = resize(img[int((h-s)/2):int((h+s)/2),int((w-s)/2):int((w+s)/2)], (256,256))\n",
        "\n",
        "        source[i] = torch.tensor(img[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2).cuda()\n",
        "        kp_source[i] = kp_detector(source[i])\n",
        "        source_area[i] = ConvexHull(kp_source[i]['value'][0].data.cpu().numpy()).volume\n",
        "\n",
        "gen_urls = [\"https://thispersondoesnotexist.com/\",\n",
        "           \"https://fakeface.rest/face/view?gender=male&minimum_age=18\",\n",
        "           \"https://fakeface.rest/face/view?gender=female&minimum_age=18\",\n",
        "           \"https://fakeface.rest/face/view?gender=male&maximum_age=17\",\n",
        "           \"https://fakeface.rest/face/view?gender=female&maximum_age=17\",\n",
        "           \"https://www.thiswaifudoesnotexist.net/example-\",\n",
        "           \"https://thisfursonadoesnotexist.com/v2/jpgs-2x/seed\",\n",
        "           \"https://eyalgruss.com/thismuppetdoesnotexist/seed\"]\n",
        "\n",
        "if len(orig_image)==num_avatars:\n",
        "    orig_image += [None]*(len(gen_urls)+1)\n",
        "\n",
        "if len(source_image)==num_avatars:\n",
        "    source_image += [None]*(len(gen_urls)+1)\n",
        "\n",
        "def load_stylegan_avatar(avatar, crop=0): # adapted from https://github.com/alievk/avatarify\n",
        "    url = gen_urls[avatar-num_avatars]\n",
        "    if url.endswith('example-'):\n",
        "      url += '%d.jpg'%np.random.randint(10000,100000)\n",
        "    elif url.endswith('seed'):\n",
        "      url += '%05d.jpg'%np.random.randint(100000)\n",
        "    if url.startswith('https://fakeface.rest'):\n",
        "      return\n",
        "    r = requests.get(url, headers={'User-Agent': \"My User Agent 1.0\"}).content\n",
        "    image = np.frombuffer(r, np.uint8)\n",
        "    image = cv2.imdecode(image, cv2.IMREAD_COLOR)\n",
        "    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
        "\n",
        "    orig_image[avatar] = image\n",
        "    source_image[avatar] = resize(image, (256, 256))\n",
        "\n",
        "    norm_source(avatar, crop=crop)\n",
        "\n",
        "source = [None]*len(orig_image)\n",
        "kp_source = [None]*len(orig_image)\n",
        "source_area = [None]*len(orig_image)\n",
        "have_gen = [False]*len(gen_urls)\n",
        "crops = [0]*len(orig_image)\n",
        "for i in range(len(orig_image)-1):\n",
        "    if i<num_avatars:\n",
        "        norm_source(i)\n",
        "    else:\n",
        "        try:\n",
        "            load_stylegan_avatar(i)\n",
        "            have_gen[i-num_avatars] = True\n",
        "        except Exception as e:\n",
        "            print(e)\n",
        "\n",
        "def full_normalize_kp(kp_driving, driving_area, kp_driving_initial, adapt_movement_scale=False,\n",
        "                 use_relative_movement=False, use_relative_jacobian=False, exaggerate_factor=1):\n",
        "    if adapt_movement_scale:\n",
        "        adapt_movement_scale = np.sqrt(source_area[avatar]) / np.sqrt(driving_area)\n",
        "    else:\n",
        "        adapt_movement_scale = 1\n",
        "\n",
        "    kp_new = {k: v for k, v in kp_driving.items()}\n",
        "\n",
        "    if use_relative_movement:\n",
        "        kp_value_diff = (kp_driving['value'] - kp_driving_initial['value'])\n",
        "        kp_value_diff *= adapt_movement_scale * exaggerate_factor\n",
        "        kp_new['value'] = kp_value_diff + kp_source[avatar]['value']\n",
        "\n",
        "        if use_relative_jacobian:\n",
        "            jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian']))\n",
        "            kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source[avatar]['jacobian'])\n",
        "\n",
        "    return kp_new\n",
        "\n",
        "\n",
        "kp_driving_initial = None\n",
        "driving_area = None\n",
        "def make_animation(driving_frame, adapt_movement_scale=False, use_relative_movement=False, use_relative_jacobian=False, exaggerate_factor=1, reset=False, auto=False):\n",
        "\n",
        "    global kp_driving_initial, driving_area\n",
        "\n",
        "    with torch.no_grad():\n",
        "        driving_frame = torch.tensor(driving_frame[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2).cuda()\n",
        "\n",
        "        kp_driving = kp_detector(driving_frame)\n",
        "\n",
        "        if auto and kp_driving_initial is not None and not reset:\n",
        "            new_dist = ((kp_source[avatar]['value'] - kp_driving['value']) ** 2).sum().data.cpu().numpy()\n",
        "            old_dist = ((kp_source[avatar]['value'] - kp_driving_initial['value']) ** 2).sum().data.cpu().numpy()\n",
        "        if kp_driving_initial is None or reset or (auto and new_dist<old_dist):\n",
        "            kp_driving_initial = kp_driving\n",
        "            driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume\n",
        "\n",
        "        kp_norm = full_normalize_kp(kp_driving=kp_driving, driving_area=driving_area,\n",
        "                                kp_driving_initial=kp_driving_initial, adapt_movement_scale=adapt_movement_scale, use_relative_movement=use_relative_movement,\n",
        "                                use_relative_jacobian=use_relative_jacobian, exaggerate_factor=exaggerate_factor)\n",
        "        out = generator(source[avatar], kp_source=kp_source[avatar], kp_driving=kp_norm)\n",
        "\n",
        "        return np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0]\n",
        "\n",
        "avatar = -1\n",
        "anti_aliasing = False\n",
        "save_socket = None\n",
        "socket = bottle.Bottle()\n",
        "@socket.route('/', apply=[websocket])\n",
        "def wsbin(ws):\n",
        "    global avatar, save_socket, have_gen\n",
        "    save_socket = ws\n",
        "    reset = True\n",
        "    new_image = None\n",
        "    wait_start = time()\n",
        "    while True:\n",
        "        try:\n",
        "            frame_start = time()\n",
        "            img_str = ws.receive()\n",
        "            t1 = time()-frame_start\n",
        "\n",
        "            if img_str is not None and (img_str.startswith('drag') or img_str.startswith('url')):\n",
        "                if img_str.startswith('url'):\n",
        "                  img_str = img_str[3:].split('<img ',1)[-1]\n",
        "                  if 'src=\"' in img_str:\n",
        "                    img_str = img_str.split('src=\"',1)[-1]\n",
        "                  else:\n",
        "                    img_str = img_str.split('href=\"',1)[-1]\n",
        "                  img_str = img_str.split('\"',1)[0]\n",
        "                if 'data:image/' not in img_str:\n",
        "                  get_image = requests.get(img_str, headers={'User-Agent': \"My User Agent 1.0\"}).content\n",
        "                else:\n",
        "                  get_image = base64.b64decode(img_str.split(',')[1])#, validate=True)\n",
        "                get_image = Image.open(BytesIO(get_image))\n",
        "                new_image = np.array(get_image)\n",
        "                continue\n",
        "\n",
        "            start = time()\n",
        "            decimg = base64.b64decode(img_str[17:].split(',')[1])#, validate=True)\n",
        "            decimg = Image.open(BytesIO(decimg))\n",
        "            decimg = (np.array(decimg)/255).astype(np.float32)\n",
        "            t2 = time()-start\n",
        "\n",
        "            new_crop = float(img_str[8:12])\n",
        "\n",
        "            reset |= img_str[0]==\"1\"\n",
        "\n",
        "            if img_str[1]==\"`\":\n",
        "                new_avatar = -1\n",
        "            elif img_str[1]==\"0\":\n",
        "                new_avatar = 9\n",
        "            elif img_str[1]==\"-\":\n",
        "                new_avatar = 10\n",
        "            elif img_str[1]==\"=\":\n",
        "                new_avatar = 11\n",
        "            else:\n",
        "                new_avatar = int(img_str[1])-1\n",
        "            if new_avatar>=0:\n",
        "                if new_avatar==num_avatars+len(gen_urls):\n",
        "                    orig_image[new_avatar] = decimg\n",
        "                    source_image[new_avatar] = decimg #resize(decimg, (256, 256))\n",
        "                elif new_avatar>=num_avatars:\n",
        "                    if have_gen[new_avatar-num_avatars]:\n",
        "                        have_gen[new_avatar-num_avatars]=False\n",
        "                    else:\n",
        "                        if new_crop != crops[new_avatar]:\n",
        "                            crops[new_avatar] = new_crop\n",
        "                        load_stylegan_avatar(new_avatar, crop=crops[new_avatar])\n",
        "                avatar = new_avatar\n",
        "                reset = True\n",
        "\n",
        "            if new_image is not None and avatar<num_avatars:\n",
        "                new_image = pad_crop_resize(new_image, *get_crop(new_image, center_face=True, crop_face=True, expansion_factor=2.5), new_h=None, new_w=None)\n",
        "                orig_image[avatar] = new_image\n",
        "                source_image[avatar] = resize(new_image,(256,256))\n",
        "                reset = True\n",
        "\n",
        "            exaggerate_factor = float(img_str[2:5])\n",
        "            alpha = float(img_str[5:8])\n",
        "            auto = int(img_str[12])\n",
        "            adapt_movement_scale = int(img_str[13])\n",
        "            use_relative_movement = int(img_str[14])\n",
        "            use_relative_jacobian = int(img_str[15])\n",
        "            show_kp = int(img_str[16])\n",
        "            if new_crop != crops[avatar] or avatar==num_avatars+len(gen_urls) or new_image is not None:\n",
        "                new_image = None\n",
        "                crops[avatar] = new_crop\n",
        "                norm_source(avatar,crop=crops[avatar])\n",
        "\n",
        "            #h,w = decimg.shape[:2]\n",
        "            #s=min(h,w)\n",
        "            #decimg = resize(decimg[(h-s)//2:(h+s)//2,(w-s)//2:(w+s)//2], (256, 256), anti_aliasing=anti_aliasing)[..., :3]\n",
        "\n",
        "            start = time()\n",
        "            out_img = make_animation(decimg, adapt_movement_scale=adapt_movement_scale, use_relative_movement=use_relative_movement,\n",
        "                                   use_relative_jacobian=use_relative_jacobian, exaggerate_factor=exaggerate_factor, reset=reset, auto=auto)\n",
        "            t3 = time()-start\n",
        "            reset = False\n",
        "\n",
        "            out_img = np.clip(out_img, 0, 1)\n",
        "\n",
        "            if show_kp:\n",
        "                if alpha>0:\n",
        "\n",
        "                  with contextlib.redirect_stdout(StringIO()):\n",
        "                      src_landmarks = fa.get_landmarks(255 * decimg)  # renamed to avoid shadowing the global kp_source list\n",
        "                  if src_landmarks:\n",
        "                    spatial_size = np.array(decimg.shape[:2][::-1])[np.newaxis]\n",
        "                    decimg = vis.draw_image_with_kp(decimg, src_landmarks[0] * 2 / spatial_size - 1)\n",
        "                with contextlib.redirect_stdout(StringIO()):\n",
        "                    kp_driver = fa.get_landmarks(255 * out_img)\n",
        "                if kp_driver:\n",
        "                    spatial_size = np.array(out_img.shape[:2][::-1])[np.newaxis]\n",
        "                    out_img = vis.draw_image_with_kp(out_img, kp_driver[0] * 2 / spatial_size - 1)\n",
        "\n",
        "            if alpha:\n",
        "              out_img = cv2.addWeighted(out_img, 1-alpha, decimg, alpha, 0)\n",
        "\n",
        "            out_img = (out_img * 255).astype(np.uint8)\n",
        "\n",
        "            #encode to string\n",
        "            start = time()\n",
        "            _, encimg = cv2.imencode(\".jpg\", out_img[...,::-1], [int(cv2.IMWRITE_JPEG_QUALITY), 80])\n",
        "            rep_str = encimg.tobytes()  # ndarray.tostring() is deprecated; tobytes() is the supported API\n",
        "            rep_str = \"data:image/jpeg;base64,\" + base64.b64encode(rep_str).decode('utf-8')\n",
        "            t4 = time()-start\n",
        "\n",
        "            start = time()\n",
        "            ws.send(rep_str)\n",
        "            t5 = time()-start\n",
        "            tsum = t1+t2+t3+t4+t5\n",
        "            tframe = time()-frame_start\n",
        "            twait = frame_start-wait_start\n",
        "            tcycle = time()-wait_start\n",
        "            #print('receive=%d decode=%d animate=%d encode=%d send=%d sum=%d total=%d wait=%d sum=%d total=%d'%(t1*1000,t2*1000,t3*1000,t4*1000,t5*1000,tsum*1000,tframe*1000,twait*1000,(t6+t0)*1000,tcycle*1000))\n",
        "            wait_start = time()\n",
        "        except Exception as e:\n",
        "            #raise e\n",
        "            pass\n",
        "            #print(e)\n",
        "\n",
        "import logging\n",
        "from bottle import ServerAdapter\n",
        "from gevent import pywsgi\n",
        "from geventwebsocket.handler import WebSocketHandler\n",
        "from geventwebsocket.logging import create_logger\n",
        "\n",
        "class MyGeventWebSocketServer(ServerAdapter):\n",
        "    def run(self, handler):\n",
        "        server = pywsgi.WSGIServer((self.host, self.port), handler, handler_class=WebSocketHandler)\n",
        "\n",
        "        if not self.quiet:\n",
        "            server.logger = create_logger('geventwebsocket.logging')\n",
        "            server.logger.setLevel(logging.INFO)\n",
        "            server.logger.addHandler(logging.StreamHandler())\n",
        "\n",
        "        self.server = server\n",
        "        server.serve_forever()\n",
        "\n",
        "    def shutdown(self):\n",
        "        self.server.stop()\n",
        "        self.server.close()\n",
        "\n",
        "if __name__ == '__main__':\n",
        "    # prepare multiprocess\n",
        "    _pool = Pool(processes=2)\n",
        "    _pool.apply_async(use_cam, (ngrok_url, 0.8))\n",
        "    print(machine)\n",
        "    server = MyGeventWebSocketServer(host='0.0.0.0', port=port)\n",
        "    from IPython.utils import io\n",
        "    with io.capture_output() as captured:\n",
        "        socket.run(server=server)"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}