{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "incomplete webrtc fomm-live.ipynb",
      "private_outputs": true,
      "provenance": [],
      "machine_shape": "hm",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/eyaler/avatars4all/blob/master/incomplete_webrtc_fomm_live.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9duzzorgTWLt"
      },
      "source": [
        "# Demo for paper \"First Order Motion Model for Image Animation\"\n",
        "\n",
        "## **Live webcam in the browser**\n",
        "\n",
        "### Made just a little bit more accessible by Eyal Gruss (eyalgruss@gmail.com)\n",
        "\n",
        "##### Original project: https://aliaksandrsiarohin.github.io/first-order-model-website\n",
        "\n",
        "##### Original notebook: https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb\n",
        "\n",
        "##### Faceswap notebook: https://colab.research.google.com/github/AliaksandrSiarohin/motion-cosegmentation/blob/master/part_swap.ipynb\n",
        "\n",
        "##### Notebook with video enhancement: https://colab.research.google.com/github/tg-bomze/Face-Image-Motion-Model/blob/master/Face_Image_Motion_Model_(Photo_2_Video)_Eng.ipynb\n",
        "\n",
        "##### Avatarify - a live version (requires local installation): https://github.com/alievk/avatarify\n",
        "\n",
        "##### This live Colab solution is heavily based on the WebRTC implementation: https://github.com/thefonseca/colabrtc, https://github.com/aiortc/aiortc\n",
        "\n",
        "##### Other WebRTC implementations: https://github.com/l4rz/first-order-model/tree/master/webrtc, https://gist.github.com/myagues/aac0c597f8ad0fa7ebe7d017b0c5603b\n",
        "\n",
        "#### **Stuff I made**:\n",
        "##### Avatars4all repository: https://github.com/eyaler/avatars4all\n",
        "##### Notebook for talking head model: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_bibi.ipynb\n",
        "##### Notebook for full body models: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_fufu.ipynb\n",
        "##### Notebook for live webcam in the browser: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/fomm_live.ipynb\n",
        "##### Notebook for Wav2Lip audio based lip syncing: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/melaflefon.ipynb\n",
        "##### List of more generative tools: https://j.mp/generativetools"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XadNYjWOJ1cw",
        "cellView": "form"
      },
      "source": [
        "#@title Setup\n",
        "#@markdown For best performance, make sure the output shows a Tesla P100 or V100 GPU. Otherwise you can do: Runtime -> Reset all runtimes\n",
        "\n",
        "machine = !nvidia-smi -L\n",
        "print(machine)\n",
        "\n",
        "%cd /content\n",
        "!git clone --depth 1 https://github.com/eyaler/first-order-model\n",
        "!wget --no-check-certificate -nc https://openavatarify.s3.amazonaws.com/weights/vox-adv-cpk.pth.tar -P /content\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/vox-adv-cpk.pth.tar\n",
        "\n",
        "!mkdir -p /root/.cache/torch/hub/checkpoints\n",
        "%cd /root/.cache/torch/hub/checkpoints\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/s3fd-619a316812.pth\n",
        "!wget --no-check-certificate -nc https://eyalgruss.com/fomm/2DFAN4-11f355bf06.pth.tar\n",
        "%cd /content\n",
        "\n",
        "!pip install imageio==2.9.0\n",
        "!pip install git+https://github.com/1adrianb/face-alignment@v1.0.1\n",
        "\n",
        "!git clone -n https://github.com/thefonseca/colabrtc\n",
        "%cd /content/colabrtc\n",
        "!git checkout 90d14e0\n",
        "!pip install fire\n",
        "!pip install av\n",
        "!pip install aiortc\n",
        "!pip install nest_asyncio\n",
        "\n",
        "import sys\n",
        "sys.path.extend(['/content/colabrtc/colabrtc','/content/first-order-model'])\n",
        "\n",
        "print(machine)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PgKavCGCeDJh",
        "cellView": "form"
      },
      "source": [
        "#@title Get the Avatar images from the web\n",
        "#@markdown 1. You can change the URLs to your **own** stuff!\n",
        "#@markdown 2. Alternatively, you can upload **local** files in the next cell\n",
        "\n",
        "image1_url = 'https://www.beat.com.au/wp-content/uploads/2018/05/ilana.jpg' #@param {type:\"string\"}\n",
        "image2_url = 'https://img.zeit.de/zeit-magazin/2017-03/marina-abramovic-performance-kuenstlerin-the-cleaner-monografie-oevre-bilder/marina-abramovic-performance-kuenstlerin-the-cleaner-monografie-oevre-10.jpg/imagegroup/original__620x620__desktop' #@param {type:\"string\"}\n",
        "image3_url = 'https://i.pinimg.com/originals/27/86/58/2786580674b7c9b20ead54f53bf0be9e.jpg' #@param {type:\"string\"}\n",
        "\n",
        "if image1_url:\n",
        "  !wget \"$image1_url\" -O /content/image1\n",
        "\n",
        "if image2_url:\n",
        "  !wget \"$image2_url\" -O /content/image2\n",
        "\n",
        "if image3_url:\n",
        "  !wget \"$image3_url\" -O /content/image3"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Y0NvZb7M9fQh",
        "cellView": "form"
      },
      "source": [
        "#@title Optionally upload local Avatar images { run: \"auto\" }\n",
        "manually_upload_images = False #@param {type:\"boolean\"}\n",
        "if manually_upload_images:\n",
        "  from google.colab import files\n",
        "  import shutil\n",
        "\n",
        "  %cd /content/sample_data\n",
        "  try:\n",
        "    uploaded = files.upload()\n",
        "  except Exception as e:\n",
        "    %cd /content\n",
        "    raise e\n",
        "\n",
        "  for i,fn in enumerate(uploaded, start=1):\n",
        "    shutil.move('/content/sample_data/'+fn, '/content/image%d'%i)\n",
        "    if i==3:\n",
        "      break\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DsbBpNw5nu0l"
      },
      "source": [
        "#@title Prepare assets\n",
        "center_image1_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image1_to_head = False #@param {type:\"boolean\"}\n",
        "image1_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image2_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image2_to_head = True #@param {type:\"boolean\"}\n",
        "image2_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image3_to_head = True #@param {type:\"boolean\"}\n",
        "crop_image3_to_head = False #@param {type:\"boolean\"}\n",
        "image3_crop_expansion_factor = 2.5 #@param {type:\"number\"}\n",
        "\n",
        "center_image_to_head = (center_image1_to_head, center_image2_to_head, center_image3_to_head)\n",
        "crop_image_to_head = (crop_image1_to_head, crop_image2_to_head, crop_image3_to_head)\n",
        "image_crop_expansion_factor = (image1_crop_expansion_factor, image2_crop_expansion_factor, image3_crop_expansion_factor)\n",
        "\n",
        "import imageio\n",
        "import numpy as np\n",
        "from google.colab.patches import cv2_imshow\n",
        "from skimage.transform import resize\n",
        "\n",
        "import face_alignment\n",
        "import torch\n",
        "\n",
        "if not hasattr(face_alignment.utils, '_original_transform'):\n",
        "    face_alignment.utils._original_transform = face_alignment.utils.transform\n",
        "\n",
        "def patched_transform(point, center, scale, resolution, invert=False):\n",
        "    return face_alignment.utils._original_transform(\n",
        "        point, center, torch.tensor(scale, dtype=torch.float32), torch.tensor(resolution, dtype=torch.float32), invert)\n",
        "\n",
        "face_alignment.utils.transform = patched_transform\n",
        "\n",
        "try:\n",
        "  fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True,\n",
        "                                      device='cuda')\n",
        "except Exception:\n",
        "  !rm -rf /root/.cache/torch/hub/checkpoints/s3fd-619a316812.pth\n",
        "  !rm -rf /root/.cache/torch/hub/checkpoints/2DFAN4-11f355bf06.pth.tar\n",
        "  fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True,\n",
        "                                      device='cuda')\n",
        "\n",
        "def create_bounding_box(target_landmarks, expansion_factor=1):\n",
        "    target_landmarks = np.array(target_landmarks)\n",
        "    x_y_min = target_landmarks.reshape(-1, 68, 2).min(axis=1)\n",
        "    x_y_max = target_landmarks.reshape(-1, 68, 2).max(axis=1)\n",
        "    expansion_factor = (expansion_factor-1)/2\n",
        "    bb_expansion_x = (x_y_max[:, 0] - x_y_min[:, 0]) * expansion_factor\n",
        "    bb_expansion_y = (x_y_max[:, 1] - x_y_min[:, 1]) * expansion_factor\n",
        "    x_y_min[:, 0] -= bb_expansion_x\n",
        "    x_y_max[:, 0] += bb_expansion_x\n",
        "    x_y_min[:, 1] -= bb_expansion_y\n",
        "    x_y_max[:, 1] += bb_expansion_y\n",
        "    return np.hstack((x_y_min, x_y_max-x_y_min))\n",
        "\n",
        "def fix_dims(im):\n",
        "    if im.ndim == 2:\n",
        "        im = np.tile(im[..., None], [1, 1, 3])\n",
        "    return im[...,:3]\n",
        "\n",
        "def get_crop(im, center_face=True, crop_face=True, expansion_factor=1, landmarks=None):\n",
        "    im = fix_dims(im)\n",
        "    if (center_face or crop_face) and not landmarks:\n",
        "        landmarks = fa.get_landmarks_from_image(im)\n",
        "    if (center_face or crop_face) and landmarks:\n",
        "        rects = create_bounding_box(landmarks, expansion_factor=expansion_factor)\n",
        "        x0,y0,w,h = sorted(rects, key=lambda x: x[2]*x[3])[-1]\n",
        "        if crop_face:\n",
        "            s = max(h, w)\n",
        "            x0 += (w-s)//2\n",
        "            x1 = x0 + s\n",
        "            y0 += (h-s)//2\n",
        "            y1 = y0 + s\n",
        "        else:\n",
        "            img_h,img_w = im.shape[:2]\n",
        "            img_s = min(img_h,img_w)\n",
        "            x0 = min(max(0, x0+(w-img_s)//2), img_w-img_s)\n",
        "            x1 = x0 + img_s\n",
        "            y0 = min(max(0, y0+(h-img_s)//2), img_h-img_s)\n",
        "            y1 = y0 + img_s\n",
        "    else:\n",
        "        h,w = im.shape[:2]\n",
        "        s = min(h,w)\n",
        "        x0 = (w-s)//2\n",
        "        x1 = x0 + s\n",
        "        y0 = (h-s)//2\n",
        "        y1 = y0 + s\n",
        "    return int(x0),int(x1),int(y0),int(y1)\n",
        "\n",
        "def pad_crop_resize(im, x0=None, x1=None, y0=None, y1=None, new_h=256, new_w=256):\n",
        "    im = fix_dims(im)\n",
        "    h,w = im.shape[:2]\n",
        "    if x0 is None:\n",
        "      x0 = 0\n",
        "    if x1 is None:\n",
        "      x1 = w\n",
        "    if y0 is None:\n",
        "      y0 = 0\n",
        "    if y1 is None:\n",
        "      y1 = h\n",
        "    if x0<0 or x1>w or y0<0 or y1>h:\n",
        "        im = np.pad(im, pad_width=[(max(-y0,0),max(y1-h,0)),(max(-x0,0),max(x1-w,0)),(0,0)], mode='edge')\n",
        "    im = im[max(y0,0):y1-min(y0,0),max(x0,0):x1-min(x0,0)]\n",
        "    if new_h is not None or new_w is not None:\n",
        "        im = resize(im, (im.shape[0] if new_h is None else new_h, im.shape[1] if new_w is None else new_w))\n",
        "    return im\n",
        "\n",
        "source_image = []\n",
        "orig_image = []\n",
        "for i in range(3):\n",
        "    img = imageio.imread('/content/image%d'%(i+1))\n",
        "    img = pad_crop_resize(img, *get_crop(img, center_face=center_image_to_head[i], crop_face=crop_image_to_head[i], expansion_factor=image_crop_expansion_factor[i]), new_h=None, new_w=None)\n",
        "    orig_image.append(img)\n",
        "    source_image.append(resize(img, (256,256)))\n",
        "num_avatars = len(source_image)\n",
        "\n",
        "cv2_imshow(np.hstack(source_image)[...,::-1]*255)"
      ],
      "execution_count": null,
      "outputs": []
    },
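    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "#@title (Optional) Sanity-check the crop expansion math\n",
        "#@markdown A minimal, self-contained sketch of the bounding-box expansion used by `create_bounding_box` above. `expand_box` and the landmark values are hypothetical, for illustration only.\n",
        "import numpy as np\n",
        "\n",
        "def expand_box(landmarks, expansion_factor=1.0):\n",
        "    # Same math as create_bounding_box, for a single face:\n",
        "    # grow the tight landmark box by (expansion_factor - 1) / 2 on each side.\n",
        "    pts = np.asarray(landmarks).reshape(-1, 2)\n",
        "    x_y_min = pts.min(axis=0)\n",
        "    x_y_max = pts.max(axis=0)\n",
        "    grow = (x_y_max - x_y_min) * (expansion_factor - 1) / 2\n",
        "    x_y_min = x_y_min - grow\n",
        "    x_y_max = x_y_max + grow\n",
        "    return np.concatenate([x_y_min, x_y_max - x_y_min])  # x, y, w, h\n",
        "\n",
        "# A 100x200 landmark box grown by a factor of 2.5 becomes 250x500,\n",
        "# centered on the same point (negative coordinates are later handled\n",
        "# by pad_crop_resize via edge padding).\n",
        "print(expand_box([[100, 100], [200, 300]], expansion_factor=2.5))"
      ],
      "execution_count": null,
      "outputs": []
    },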
    {
      "cell_type": "code",
      "metadata": {
        "id": "MOZ0UZZa-xzI",
        "cellView": "form"
      },
      "source": [
        "#@title Modify signaling.py\n",
        "\n",
        "%%writefile /content/colabrtc/colabrtc/signaling.py\n",
        "import json\n",
        "import logging\n",
        "import random\n",
        "import IPython\n",
        "import asyncio\n",
        "\n",
        "from aiortc import RTCIceCandidate, RTCSessionDescription\n",
        "from aiortc.contrib.signaling import object_from_string, object_to_string, BYE\n",
        "from aiortc.contrib.signaling import ApprtcSignaling\n",
        "\n",
        "from server import FilesystemRTCServer\n",
        "\n",
        "try:\n",
        "    import aiohttp\n",
        "    import websockets\n",
        "except ImportError:  # pragma: no cover\n",
        "    aiohttp = None\n",
        "    websockets = None\n",
        "\n",
        "logger = logging.getLogger(\"colabrtc.signaling\")\n",
        "\n",
        "try:\n",
        "    from google.colab import output\n",
        "except ImportError:\n",
        "    output = None\n",
        "    logger.info('google.colab not available')\n",
        "\n",
        "\n",
        "class ColabApprtcSignaling(ApprtcSignaling):\n",
        "    def __init__(self, room=None, javacript_callable=False):\n",
        "        super().__init__(room)\n",
        "\n",
        "        self._javascript_callable = javacript_callable\n",
        "\n",
        "        if output and javacript_callable:\n",
        "            output.register_callback(f'{room}.colab.signaling.connect', self.connect_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.send', self.send_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.receive', self.receive_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.close', self.close_sync)\n",
        "\n",
        "    @property\n",
        "    def room(self):\n",
        "        return self._room\n",
        "\n",
        "    async def connect(self):\n",
        "        join_url = self._origin + \"/join/\" + self._room\n",
        "\n",
        "        # fetch room parameters\n",
        "        self._http = aiohttp.ClientSession()\n",
        "        async with self._http.post(join_url) as response:\n",
        "            # we cannot use response.json() due to:\n",
        "            # https://github.com/webrtc/apprtc/issues/562\n",
        "            data = json.loads(await response.text())\n",
        "        assert data[\"result\"] == \"SUCCESS\"\n",
        "        params = data[\"params\"]\n",
        "\n",
        "        self.__is_initiator = params[\"is_initiator\"] == \"true\"\n",
        "        self.__messages = params[\"messages\"]\n",
        "        self.__post_url = (\n",
        "            self._origin + \"/message/\" + self._room + \"/\" + params[\"client_id\"]\n",
        "        )\n",
        "\n",
        "        # connect to websocket\n",
        "        self._websocket = await websockets.connect(\n",
        "            params[\"wss_url\"], extra_headers={\"Origin\": self._origin}\n",
        "        )\n",
        "        await self._websocket.send(\n",
        "            json.dumps(\n",
        "                {\n",
        "                    \"clientid\": params[\"client_id\"],\n",
        "                    \"cmd\": \"register\",\n",
        "                    \"roomid\": params[\"room_id\"],\n",
        "                }\n",
        "            )\n",
        "        )\n",
        "\n",
        "        print(f\"AppRTC room is {params['room_id']} {params['room_link']}\")\n",
        "\n",
        "        return params\n",
        "\n",
        "    def connect_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        result = loop.run_until_complete(self.connect())\n",
        "        if self._javascript_callable:\n",
        "            return IPython.display.JSON(result)\n",
        "        return result\n",
        "\n",
        "    def close_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        return loop.run_until_complete(self.close())\n",
        "\n",
        "    def recv_nowait(self):\n",
        "        try:\n",
        "            # websockets buffers undelivered messages in a deque; pop() takes from the right\n",
        "            return self._websocket.messages.pop()\n",
        "        except IndexError:  # no message waiting\n",
        "            pass\n",
        "\n",
        "    async def receive(self):\n",
        "        if self.__messages:\n",
        "            message = self.__messages.pop()\n",
        "        else:\n",
        "            message = self.recv_nowait()\n",
        "            if message:\n",
        "                message = json.loads(message)[\"msg\"]\n",
        "\n",
        "        if message:\n",
        "            logger.debug(\"< \" + message)\n",
        "            return object_from_string(message)\n",
        "\n",
        "    def receive_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        message = loop.run_until_complete(self.receive())\n",
        "        if message and self._javascript_callable:\n",
        "            message = object_to_string(message)\n",
        "            print('receive:', message)\n",
        "            message = json.loads(message)\n",
        "            message = IPython.display.JSON(message)\n",
        "        return message\n",
        "\n",
        "    async def send(self, obj):\n",
        "        message = object_to_string(obj)\n",
        "        logger.debug(\"> \" + message)\n",
        "        if self.__is_initiator:\n",
        "            await self._http.post(self.__post_url, data=message)\n",
        "        else:\n",
        "            await self._websocket.send(json.dumps({\"cmd\": \"send\", \"msg\": message}))\n",
        "\n",
        "    def send_sync(self, message):\n",
        "        print('send:', message)\n",
        "        if isinstance(message, str):\n",
        "            message_json = json.loads(message)\n",
        "            if 'candidate' in message_json:\n",
        "                message_json['type'] = 'candidate'\n",
        "                message_json[\"id\"] = message_json[\"sdpMid\"]\n",
        "                message_json[\"label\"] = message_json[\"sdpMLineIndex\"]\n",
        "                message = json.dumps(message_json)\n",
        "                message = object_from_string(message)\n",
        "        loop = asyncio.get_event_loop()\n",
        "        return loop.run_until_complete(self.send(message))\n",
        "\n",
        "\n",
        "class ColabSignaling:\n",
        "    def __init__(self, signaling_folder=None, webrtc_server=None, room=None, javacript_callable=False):\n",
        "        if room is None:\n",
        "            room = \"\".join([random.choice(\"0123456789\") for x in range(10)])\n",
        "\n",
        "        if webrtc_server is None and signaling_folder is None:\n",
        "            raise ValueError('Either a WebRTC server or a signaling folder must be provided.')\n",
        "        if webrtc_server is None:\n",
        "            self._webrtc_server = FilesystemRTCServer(folder=signaling_folder)\n",
        "        else:\n",
        "            self._webrtc_server = webrtc_server\n",
        "\n",
        "        self._room = room\n",
        "        self._javascript_callable = javacript_callable\n",
        "\n",
        "        if output and javacript_callable:\n",
        "            output.register_callback(f'{room}.colab.signaling.connect', self.connect_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.send', self.send_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.receive', self.receive_sync)\n",
        "            output.register_callback(f'{room}.colab.signaling.close', self.close_sync)\n",
        "\n",
        "    @property\n",
        "    def room(self):\n",
        "        return self._room\n",
        "\n",
        "    async def connect(self):\n",
        "        data = self._webrtc_server.join(self._room)\n",
        "        assert data[\"result\"] == \"SUCCESS\"\n",
        "        params = data[\"params\"]\n",
        "\n",
        "        self.__is_initiator = params[\"is_initiator\"] == \"true\"\n",
        "        self.__messages = params[\"messages\"]\n",
        "        self.__peer_id = params[\"peer_id\"]\n",
        "\n",
        "        logger.info(f\"Room ID: {params['room_id']}\")\n",
        "        logger.info(f\"Peer ID: {self.__peer_id}\")\n",
        "        return params\n",
        "\n",
        "    def connect_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        result = loop.run_until_complete(self.connect())\n",
        "        if self._javascript_callable:\n",
        "            return IPython.display.JSON(result)\n",
        "        return result\n",
        "\n",
        "    async def close(self):\n",
        "        if self._javascript_callable:\n",
        "            return self.send_sync(BYE)\n",
        "        else:\n",
        "            await self.send(BYE)\n",
        "\n",
        "    def close_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        return loop.run_until_complete(self.close())\n",
        "\n",
        "    async def receive(self):\n",
        "        message = self._webrtc_server.receive_message(self._room, self.__peer_id)\n",
        "        if message and isinstance(message, str) and not self._javascript_callable:\n",
        "            message = object_from_string(message)\n",
        "        return message\n",
        "\n",
        "    def receive_sync(self):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        message = loop.run_until_complete(self.receive())\n",
        "        if message and self._javascript_callable:\n",
        "            message = json.loads(message)\n",
        "            message = IPython.display.JSON(message)\n",
        "        return message\n",
        "\n",
        "    async def send(self, message):\n",
        "        if not self._javascript_callable or not isinstance(message, str):\n",
        "            message = object_to_string(message)\n",
        "        self._webrtc_server.send_message(self._room, self.__peer_id, message)\n",
        "\n",
        "    def send_sync(self, message):\n",
        "        loop = asyncio.get_event_loop()\n",
        "        return loop.run_until_complete(self.send(message))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
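    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what `ColabSignaling` expects from its server, here is a toy in-memory stand-in for `FilesystemRTCServer` (which persists messages under a shared folder). The class below is hypothetical and only mirrors the `join` / `send_message` / `receive_message` shape that `signaling.py` calls; it is a sketch, not the real server."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from collections import defaultdict, deque\n",
        "\n",
        "class ToyRTCServer:\n",
        "    # Hypothetical stand-in: one inbox per peer, per room.\n",
        "    def __init__(self):\n",
        "        self.rooms = defaultdict(dict)  # room -> {peer_id: inbox}\n",
        "\n",
        "    def join(self, room):\n",
        "        peer_id = str(len(self.rooms[room]) + 1)\n",
        "        self.rooms[room][peer_id] = deque()\n",
        "        # The first peer to join initiates the call, as in connect() above.\n",
        "        return {'result': 'SUCCESS',\n",
        "                'params': {'room_id': room, 'peer_id': peer_id,\n",
        "                           'is_initiator': 'true' if peer_id == '1' else 'false',\n",
        "                           'messages': []}}\n",
        "\n",
        "    def send_message(self, room, sender_id, message):\n",
        "        # Deliver to every other peer in the room.\n",
        "        for peer_id, inbox in self.rooms[room].items():\n",
        "            if peer_id != sender_id:\n",
        "                inbox.append(message)\n",
        "\n",
        "    def receive_message(self, room, peer_id):\n",
        "        # Non-blocking, like ColabSignaling.receive(): None when empty.\n",
        "        inbox = self.rooms[room][peer_id]\n",
        "        return inbox.popleft() if inbox else None\n",
        "\n",
        "server = ToyRTCServer()\n",
        "offerer = server.join('1234')['params']\n",
        "answerer = server.join('1234')['params']\n",
        "server.send_message('1234', offerer['peer_id'], 'offer-sdp')\n",
        "print(server.receive_message('1234', answerer['peer_id']))  # offer-sdp"
      ],
      "execution_count": null,
      "outputs": []
    },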
    {
      "cell_type": "code",
      "metadata": {
        "id": "asMKgFvID89j",
        "cellView": "form"
      },
      "source": [
        "#@title Modify peer-ui.js\n",
        "\n",
        "%%writefile /content/colabrtc/colabrtc/js/peer-ui.js\n",
        "var PeerUI = function(room, container_id) {\n",
        "    // Define initial start time of the call (defined as connection between peers).\n",
        "    startTime = null;\n",
        "    constraints = {audio: false, video: true};\n",
        "\n",
        "    let peerDiv = null;\n",
        "\n",
        "    if (container_id) {\n",
        "        peerDiv = document.getElementById(container_id);\n",
        "    } else {\n",
        "        peerDiv = document.createElement('div');\n",
        "        document.body.appendChild(peerDiv);\n",
        "    }\n",
        "\n",
        "    var style = document.createElement('style');\n",
        "    style.type = 'text/css';\n",
        "    style.innerHTML = `\n",
        "        .loader {\n",
        "          position: absolute;\n",
        "          left: 38%;\n",
        "          top: 60%;\n",
        "          z-index: 1;\n",
        "          width: 50px;\n",
        "          height: 50px;\n",
        "          margin: -75px 0 0 -75px;\n",
        "          border: 16px solid #f3f3f3;\n",
        "          border-radius: 50%;\n",
        "          border-top: 16px solid #3498db;\n",
        "          -webkit-animation: spin 2s linear infinite;\n",
        "          animation: spin 2s linear infinite;\n",
        "        }\n",
        "\n",
        "        @keyframes spin {\n",
        "          0% { transform: rotate(0deg); }\n",
        "          100% { transform: rotate(360deg); }\n",
        "        }\n",
        "    `;\n",
        "    document.getElementsByTagName('head')[0].appendChild(style);\n",
        "\n",
        "    var adapter = document.createElement('script');\n",
        "    adapter.setAttribute('src','https://webrtc.github.io/adapter/adapter-latest.js');\n",
        "    document.getElementsByTagName('head')[0].appendChild(adapter);\n",
        "\n",
        "    //peerDiv.style.width = '70%';\n",
        "\n",
        "    // Define video elements.\n",
        "    const videoDiv = document.createElement('div');\n",
        "    videoDiv.style.display = 'none';\n",
        "    videoDiv.style.textAlign = '-webkit-center';\n",
        "    const localView = document.createElement('video');\n",
        "    const remoteView = document.createElement('video');\n",
        "    remoteView.autoplay = true;\n",
        "    //localView.style.display = 'block';\n",
        "    //remoteView.style.display = 'block';\n",
        "    localView.style.display = 'inline';\n",
        "    remoteView.style.display = 'inline';\n",
        "    localView.height = 240;\n",
        "    localView.width = 320;\n",
        "    remoteView.height = 240;\n",
        "    remoteView.width = 320;\n",
        "    videoDiv.appendChild(localView);\n",
        "    videoDiv.appendChild(remoteView);\n",
        "    const loader = document.createElement('div');\n",
        "    loader.style.display = 'none';\n",
        "    loader.className = 'loader';\n",
        "    videoDiv.appendChild(loader);\n",
        "\n",
        "    // Logs a message with the id and size of a video element.\n",
        "    function logVideoLoaded(event) {\n",
        "        const video = event.target;\n",
        "        trace(`${video.id} videoWidth: ${video.videoWidth}px, ` +\n",
        "            `videoHeight: ${video.videoHeight}px.`);\n",
        "\n",
        "        //localView.style.width = '20%';\n",
        "        //localView.style.position = 'absolute';\n",
        "        //remoteView.style.display = 'block';\n",
        "        localView.style.display = 'inline';\n",
        "        remoteView.style.display = 'inline';\n",
        "        //remoteView.style.width = '100%';\n",
        "        //remoteView.style.height = 'auto';\n",
        "        loader.style.display = 'none';\n",
        "        //fullscreenButton.style.display = 'inline';\n",
        "    }\n",
        "\n",
        "    //localView.addEventListener('loadedmetadata', logVideoLoaded);\n",
        "    remoteView.addEventListener('loadedmetadata', logVideoLoaded);\n",
        "    //remoteView.addEventListener('onresize', logResizedVideo);\n",
        "\n",
        "    // Define action buttons.\n",
        "    const controlDiv = document.createElement('div');\n",
        "    controlDiv.style.textAlign = 'center';\n",
        "    const startButton = document.createElement('button');\n",
        "    const fullscreenButton = document.createElement('button');\n",
        "    const hangupButton = document.createElement('button');\n",
        "    startButton.textContent = 'Join room: ' + room;\n",
        "    startButton.style.display = 'none';\n",
        "    fullscreenButton.textContent = 'Fullscreen';\n",
        "    hangupButton.textContent = 'Hangup';\n",
        "    controlDiv.appendChild(startButton);\n",
        "    controlDiv.appendChild(fullscreenButton);\n",
        "    controlDiv.appendChild(hangupButton);\n",
        "\n",
        "    // Set up initial action buttons status: disable call and hangup.\n",
        "    //callButton.disabled = true;\n",
        "    hangupButton.style.display = 'none';\n",
        "    fullscreenButton.style.display = 'none';\n",
        "\n",
        "    peerDiv.appendChild(videoDiv);\n",
        "    peerDiv.appendChild(controlDiv);\n",
        "\n",
        "    this.localView = localView;\n",
        "    this.remoteView = remoteView;\n",
        "    this.peerDiv = peerDiv;\n",
        "    this.videoDiv = videoDiv;\n",
        "    this.loader = loader;\n",
        "    this.startButton = startButton;\n",
        "    this.fullscreenButton = fullscreenButton;\n",
        "    this.hangupButton = hangupButton;\n",
        "    this.constraints = constraints;\n",
        "    this.room = room;\n",
        "\n",
        "    const self = this;\n",
        "    async function start() {\n",
        "        await self.connect(self.room);\n",
        "    }\n",
        "\n",
        "    // Handles hangup action: ends up call, closes connections and resets peers.\n",
        "    async function hangup() {\n",
        "        await self.disconnect();\n",
        "    }\n",
        "\n",
        "    function openFullscreen() {\n",
        "      let elem = remoteView;\n",
        "      if (elem.requestFullscreen) {\n",
        "        elem.requestFullscreen();\n",
        "      } else if (elem.mozRequestFullScreen) { /* Firefox */\n",
        "        elem.mozRequestFullScreen();\n",
        "      } else if (elem.webkitRequestFullscreen) { /* Chrome, Safari & Opera */\n",
        "        elem.webkitRequestFullscreen();\n",
        "      } else if (elem.msRequestFullscreen) { /* IE/Edge */\n",
        "        elem.msRequestFullscreen();\n",
        "      }\n",
        "    }\n",
        "\n",
        "    // Add click event handlers for buttons.\n",
        "    this.startButton.addEventListener('click', start);\n",
        "    this.fullscreenButton.addEventListener('click', openFullscreen);\n",
        "    this.hangupButton.addEventListener('click', hangup);\n",
        "    this.startButton.click()\n",
        "};\n",
        "\n",
        "\n",
        "PeerUI.prototype.connect = async function(room) {\n",
        "    //startButton.disabled = true;\n",
        "    const stream = await navigator.mediaDevices.getUserMedia(this.constraints);\n",
        "    this.localView.srcObject = stream;\n",
        "    this.localView.play();\n",
        "    trace('Received local stream.');\n",
        "\n",
        "    this.loader.style.display = 'block';\n",
        "    this.startButton.style.display = 'none';\n",
        "    //this.localView.style.width = '100%';\n",
        "    //this.localView.style.height = 'auto';\n",
        "    //this.localView.style.position = 'relative';\n",
        "    //this.remoteView.style.display = 'none';\n",
        "    this.videoDiv.style.display = 'block';\n",
        "\n",
        "    if (typeof google !== 'undefined') {\n",
        "      // Resize the output to fit the video element.\n",
        "      google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);\n",
        "    }\n",
        "\n",
        "    try {\n",
        "        //this.joinButton.style.display = 'none';\n",
        "        this.hangupButton.style.display = 'inline';\n",
        "\n",
        "        trace('Starting call.');\n",
        "        this.startTime = window.performance.now();\n",
        "\n",
        "        this.peer = new Peer();\n",
        "        await this.peer.connect(this.room);\n",
        "        //const obj = JSON.stringify([this.peer.connect, this.room]);\n",
        "        //this.worker.postMessage([this.peer, this.room]);\n",
        "\n",
        "        this.peer.pc.ontrack = ({track, streams}) => {\n",
        "            // once media for a remote track arrives, show it in the remote video element\n",
        "            track.onunmute = () => {\n",
        "                // don't set srcObject again if it is already set.\n",
        "                if (this.remoteView.srcObject) return;\n",
        "                console.log(streams);\n",
        "                this.remoteView.srcObject = streams[0];\n",
        "                trace('Remote peer connection received remote stream.');\n",
        "                this.remoteView.play();\n",
        "            };\n",
        "        };\n",
        "\n",
        "        const localStream = this.localView.srcObject;\n",
        "        console.log('adding local stream');\n",
        "        await this.peer.addLocalStream(localStream);\n",
        "\n",
        "        await this.peer.waitMessage();\n",
        "\n",
        "    } catch (err) {\n",
        "        console.error(err);\n",
        "    }\n",
        "};\n",
        "\n",
        "PeerUI.prototype.disconnect = async function() {\n",
        "    await this.peer.disconnect();\n",
        "    //this.startButton.style.display = 'inline';\n",
        "    this.startButton.style.display = 'none';\n",
        "    //this.joinButton.style.display = 'inline';\n",
        "    this.hangupButton.style.display = 'none';\n",
        "    this.fullscreenButton.style.display = 'none';\n",
        "    this.videoDiv.style.display = 'none';\n",
        "\n",
        "    trace('Ending call.');\n",
        "    this.localView.srcObject.getVideoTracks()[0].stop();\n",
        "    this.peerDiv.remove();\n",
        "};\n",
        "\n",
        "// Logs an action (text) and the time when it happened on the console.\n",
        "function trace(text) {\n",
        "  text = text.trim();\n",
        "  const now = (window.performance.now() / 1000).toFixed(3);\n",
        "  console.log(now, text);\n",
        "}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_km3rqhph9G3",
        "cellView": "form"
      },
      "source": [
        "#@title Create fomm_live.py\n",
        "\n",
        "%%writefile /content/colabrtc/examples/fomm_live.py\n",
        "import numpy as np\n",
        "import torch\n",
        "\n",
        "def normalize_kp(kp):\n",
        "    # Center the keypoints, then rescale them by the square root of their convex-hull area.\n",
        "    kp = kp - kp.mean(axis=0, keepdims=True)\n",
        "    area = ConvexHull(kp[:, :2]).volume\n",
        "    area = np.sqrt(area)\n",
        "    kp[:, :2] = kp[:, :2] / area\n",
        "    return kp\n",
        "\n",
        "def full_normalize_kp(source_area, kp_source, driving_area, kp_driving, kp_driving_initial, adapt_movement_scale=False,\n",
        "                      use_relative_movement=False, use_relative_jacobian=False, exaggerate_factor=1):\n",
        "    if adapt_movement_scale:\n",
        "        adapt_movement_scale = np.sqrt(source_area) / np.sqrt(driving_area)\n",
        "    else:\n",
        "        adapt_movement_scale = 1\n",
        "\n",
        "    kp_new = {k: v for k, v in kp_driving.items()}\n",
        "\n",
        "    if use_relative_movement:\n",
        "        kp_value_diff = (kp_driving['value'] - kp_driving_initial['value'])\n",
        "        kp_value_diff *= adapt_movement_scale * exaggerate_factor\n",
        "        kp_new['value'] = kp_value_diff + kp_source['value']\n",
        "\n",
        "        if use_relative_jacobian:\n",
        "            jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian']))\n",
        "            kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source['jacobian'])\n",
        "\n",
        "    return kp_new\n",
        "\n",
        "def make_animation(source, source_area, kp_source, driving_area, kp_driving_initial, driving_frame, kp_detector,\n",
        "                   generator, adapt_movement_scale=False, use_relative_movement=False,\n",
        "                   use_relative_jacobian=False,\n",
        "                   exaggerate_factor=1, reset=False):\n",
        "\n",
        "    with torch.no_grad():\n",
        "        driving_frame = torch.tensor(driving_frame[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2).cuda()\n",
        "\n",
        "        # On the first frame (or after a reset), store the initial driving keypoints\n",
        "        # that serve as the reference for relative motion.\n",
        "        if kp_driving_initial is None or reset:\n",
        "            kp_driving_initial = kp_detector(driving_frame)\n",
        "            driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume\n",
        "\n",
        "        kp_driving = kp_detector(driving_frame)\n",
        "        kp_norm = full_normalize_kp(source_area=source_area, kp_source=kp_source, driving_area=driving_area,\n",
        "                                    kp_driving=kp_driving, kp_driving_initial=kp_driving_initial,\n",
        "                                    adapt_movement_scale=adapt_movement_scale,\n",
        "                                    use_relative_movement=use_relative_movement,\n",
        "                                    use_relative_jacobian=use_relative_jacobian,\n",
        "                                    exaggerate_factor=exaggerate_factor)\n",
        "        out = generator(source, kp_source=kp_source, kp_driving=kp_norm)\n",
        "\n",
        "        return np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0], driving_area, kp_driving_initial\n",
        "\n",
        "import sys\n",
        "sys.path.extend(['/content/colabrtc/colabrtc','/content/first-order-model'])\n",
        "from peer import FrameTransformer\n",
        "from call import ColabCall\n",
        "from scipy.spatial import ConvexHull\n",
        "from skimage.transform import resize\n",
        "class Avatarify(FrameTransformer):\n",
        "\n",
        "    def __init__(self, freq=1. / 30, avatar=0):\n",
        "        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'\n",
        "        self.avatar = avatar\n",
        "        self.freq = freq\n",
        "\n",
        "    def setup(self):\n",
        "        import traceback\n",
        "        from demo import load_checkpoints\n",
        "        import imageio\n",
        "\n",
        "        self.traceback = traceback\n",
        "        self.reset = True\n",
        "        self.kp_driving_initial = None\n",
        "        self.driving_area = None\n",
        "        self.generator, self.kp_detector = load_checkpoints(config_path='/content/first-order-model/config/vox-adv-256.yaml',\n",
        "                                                  checkpoint_path='/content/vox-adv-cpk.pth.tar')\n",
        "\n",
        "\n",
        "        source_image = imageio.imread('/content/image1')  # going extensionless allows more image formats\n",
        "        if source_image.ndim == 2:\n",
        "            source_image = np.tile(source_image[..., None], [1, 1, 3])\n",
        "        h, w = source_image.shape[:2]\n",
        "        s = min(h, w)\n",
        "        # Center-crop to a square and resize to the 256x256 model input.\n",
        "        source_image = resize(source_image[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2], (256, 256))[..., :3]\n",
        "\n",
        "        with torch.no_grad():\n",
        "            self.source = torch.tensor(source_image[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2).cuda()\n",
        "            self.kp_source = self.kp_detector(self.source)\n",
        "            self.source_area = ConvexHull(self.kp_source['value'][0].data.cpu().numpy()).volume\n",
        "\n",
        "    def transform(self, frame, frame_idx=None, avatar=0):\n",
        "        # Process only every int(1 / freq)-th frame to limit the GPU load.\n",
        "        if self.freq and frame_idx is not None and frame_idx % int(1. / self.freq) != 0:\n",
        "            return\n",
        "\n",
        "        #out_img = frame[...,::-1]\n",
        "        #return\n",
        "\n",
        "        if frame.ndim == 2:\n",
        "            frame = np.tile(frame[..., None], [1, 1, 3])\n",
        "        h, w = frame.shape[:2]\n",
        "        s = min(h, w)\n",
        "        frame = resize(frame[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2], (256, 256))[..., :3]\n",
        "\n",
        "        try:\n",
        "            out_img, self.driving_area, self.kp_driving_initial = make_animation(self.source, self.source_area, self.kp_source, self.driving_area, self.kp_driving_initial, frame, self.kp_detector, self.generator,\n",
        "                                     adapt_movement_scale=True, use_relative_movement=True,\n",
        "                                     use_relative_jacobian=True,\n",
        "                                     exaggerate_factor=1,\n",
        "                                     reset=self.reset)\n",
        "            self.reset = False\n",
        "            # Convert the float prediction to uint8 and flip the channel order (RGB to BGR).\n",
        "            out_img = (np.clip(out_img, 0, 1) * 255).astype(np.uint8)[..., ::-1]\n",
        "\n",
        "            return out_img\n",
        "        except Exception:\n",
        "            self.traceback.print_exc()\n",
        "            return frame\n",
        "\n",
        "def run(room=None, signaling_folder='/content/webrtc', avatar=0, frame_freq=1. / 10, verbose=False):\n",
        "    if room:\n",
        "        room = str(room)\n",
        "\n",
        "    afy = Avatarify(freq=frame_freq, avatar=avatar)\n",
        "    call = ColabCall()\n",
        "    call.create(room, signaling_folder=signaling_folder, verbose=verbose,\n",
        "                frame_transformer=afy, multiprocess=False)\n",
        "\n",
        "import fire\n",
        "if __name__ == '__main__':\n",
        "    fire.Fire(run)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "g8qFmqu1J7-j",
        "cellView": "form"
      },
      "source": [
        "#@title Go live!\n",
        "\n",
        "#exaggerate_factor = 1 #@param {type:\"slider\", min:0.1, max:5, step:0.1}\n",
        "#adapt_movement_scale = True #@param {type:\"boolean\"}\n",
        "#use_relative_movement = True #@param {type:\"boolean\"}\n",
        "#use_relative_jacobian = True #@param {type:\"boolean\"}\n",
        "\n",
        "!pkill -f fomm_live.py\n",
        "!rm -rf /content/webrtc\n",
        "!rm -f /content/nohup.txt\n",
        "\n",
        "# Due to multiprocessing support limitations, we need to run the Python peer via the command line.\n",
        "!nohup python3 /content/colabrtc/examples/fomm_live.py \\\n",
        "--room 237 --avatar 0 > /content/nohup.txt 2>&1 &\n",
        "\n",
        "import os\n",
        "from time import sleep\n",
        "\n",
        "while True:\n",
        "  if os.path.exists('/content/nohup.txt'):\n",
        "    with open('/content/nohup.txt') as f:\n",
        "      if 'INFO:colabrtc.signaling:Peer ID:' in f.read():\n",
        "        break\n",
        "  sleep(10)\n",
        "\n",
        "from call import ColabCall\n",
        "call = ColabCall()\n",
        "call.join(room='237')"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}