{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Project 2: Panoramic Image Stitching\n",
    "\n",
    "This is Project 2 for [UW CSE P576 Computer Vision](https://courses.cs.washington.edu/courses/csep576/20sp). \n",
    "\n",
    "**Getting Started:** You should complete **[Project 1](https://courses.cs.washington.edu/courses/csep576/20sp/projects/Project1.html \"Project 1\")** first (you will need interest points and descriptors from this project). The source files for both projects are [here](https://courses.cs.washington.edu/courses/csep576/20sp/projects/project12/project12.zip \"Project 1 and 2 Source Files\"). To run the project locally you will need IPython/Jupyter installed, e.g., see instructions at http://jupyter.org/install.html. The notebooks are written for Python 3.x. Launch Jupyter and open `Project2.ipynb`. Alternatively, you can import the standalone version of the notebook into [Colaboratory](https://colab.research.google.com \"Colab\") and run it without installing anything. Use File->Upload Notebook in Colab and open the notebook in `standalone/Project2s.ipynb`.\n",
    "\n",
    "**This project:** In this project you will implement a panoramic image stitcher. This will build on the interest points and descriptors developed in Project 1. You'll begin with geometric filtering via RANSAC, then estimate pairwise rotations and chain these together to align the panorama. When you have a basic stitcher working, improve it with better alignment, blending, or other new features and document your findings.\n",
    "\n",
    "**What to turn in:** Turn in your completed ipynb notebook as well as any source .py files that you modified. Clearly describe any enhancements or experiments you tried in your ipynb notebook. Put everything in a single flat zipfile and upload via the link in Canvas.\n",
    "\n",
    "`version 040920`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import scipy.linalg\n",
    "import os.path\n",
    "from time import time\n",
    "import types\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "#import im_util\n",
    "#import interest_point\n",
    "#import ransac\n",
    "#import geometry\n",
    "#import render\n",
    "#import panorama\n",
    "\n",
    "%matplotlib inline\n",
    "# edit this line to change the figure size\n",
    "plt.rcParams['figure.figsize'] = (16.0, 10.0)\n",
    "# force auto-reload of imported modules before running code\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget -nc https://courses.cs.washington.edu/courses/csep576/18sp/projects/project12/pano_images.zip && unzip -n -d data pano_images.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### im_util.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import numpy as np\n",
    "import PIL.Image as pil\n",
    "import scipy.signal as sps\n",
    "import matplotlib.pyplot as plt\n",
    "from scipy.ndimage import map_coordinates\n",
    "\n",
    "def convolve_1d(x, k):\n",
    "  \"\"\"\n",
    "  Convolve vector x with kernel k\n",
    "\n",
    "  Inputs: x=input vector (Nx)\n",
    "          k=input kernel (Nk)\n",
    "\n",
    "  Outputs: y=output vector (Nx)\n",
    "  \"\"\"\n",
    "  y=np.zeros_like(x)\n",
    "\n",
    "  \"\"\"\n",
    "  *******************************************\n",
    "  *** TODO: write code to perform convolution\n",
    "  *******************************************\n",
    "\n",
    "  The output should be the same size as the input\n",
    "  You can assume zero padding, and an odd-sized kernel\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  *******************************************\n",
    "  \"\"\"\n",
    "\n",
    "  return y\n",
    "\n",
    "def convolve_rows(im, k):\n",
    "  \"\"\"\n",
    "  Convolve image im with kernel k\n",
    "\n",
    "  Inputs: im=input image (H, W, B)\n",
    "          k=1D convolution kernel (N)\n",
    "\n",
    "  Outputs: im_out=output image (H, W, B)\n",
    "  \"\"\"\n",
    "  im_out = np.zeros_like(im)\n",
    "\n",
    "  \"\"\"\n",
    "  *****************************************\n",
    "  *** TODO: write code to convolve an image\n",
    "  *****************************************\n",
    "\n",
    "  Convolve the rows of image im with kernel k\n",
    "  The output should be the same size as the input\n",
    "  You can assume zero padding, and an odd-sized kernel\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  *****************************************\n",
    "  \"\"\"\n",
    "\n",
    "  return im_out\n",
    "\n",
    "def gauss_kernel(sigma):\n",
    "  \"\"\"\n",
    "  1D Gauss kernel of standard deviation sigma\n",
    "  \"\"\"\n",
    "  l = int(np.ceil(2 * sigma))\n",
    "  x = np.linspace(-l, l, 2*l+1)\n",
    "\n",
    "  # FORNOW\n",
    "  gx = np.zeros_like(x)\n",
    "\n",
    "  \"\"\"\n",
    "  *******************************************\n",
    "  *** TODO: compute gaussian kernel at each x\n",
    "  *******************************************\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  *******************************************\n",
    "  \"\"\"\n",
    "\n",
    "  gx = np.expand_dims(gx,0)\n",
    "  return gx\n",
    "\n",
    "def convolve_gaussian(im, sigma):\n",
    "  \"\"\"\n",
    "  2D gaussian convolution\n",
    "  \"\"\"\n",
    "  imc=np.zeros_like(im)\n",
    "\n",
    "  \"\"\"\n",
    "  ***************************************\n",
    "  *** TODO separable gaussian convolution\n",
    "  ***************************************\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  ***************************************\n",
    "  \"\"\"\n",
    "  return imc\n",
    "\n",
    "def compute_gradients(img):\n",
    "  \"\"\"\n",
    "  Compute horizontal and vertical gradients of image img\n",
    "\n",
    "  Inputs: img=input image (H, W, B)\n",
    "\n",
    "  Outputs: Ix, Iy=gradient images (H, W, B)\n",
    "  \"\"\"\n",
    "  Ix=np.zeros_like(img)\n",
    "  Iy=np.zeros_like(img)\n",
    "\n",
    "  \"\"\"\n",
    "  ***********************************************\n",
    "  *** TODO: write code to compute image gradients\n",
    "  ***********************************************\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  ***********************************************\n",
    "  \"\"\"\n",
    "  return Ix, Iy\n",
    "\n",
    "def image_open(filename):\n",
    "  \"\"\"\n",
    "  Returns a numpy float image with values in the range (0,1)\n",
    "  \"\"\"\n",
    "  pil_im = pil.open(filename)\n",
    "  im_np = np.array(pil_im).astype(np.float32)\n",
    "  im_np /= 255.0\n",
    "  return im_np\n",
    "\n",
    "def image_save(im_np, filename):\n",
    "  \"\"\"\n",
    "  Saves a numpy float image to file\n",
    "  \"\"\"\n",
    "  if (len(im_np.shape)==2):\n",
    "    im_np = np.expand_dims(im_np, 2)\n",
    "  if (im_np.shape[2]==1):\n",
    "    im_np= np.repeat(im_np, 3, axis=2)\n",
    "  im_np = np.maximum(0.0, np.minimum(im_np, 1.0))\n",
    "  pil_im = pil.fromarray((im_np*255).astype(np.uint8))\n",
    "  pil_im.save(filename)\n",
    "\n",
    "def image_figure(im, dpi=100):\n",
    "  \"\"\"\n",
    "  Creates a matplotlib figure around an image,\n",
    "  useful for writing to file with savefig()\n",
    "  \"\"\"\n",
    "  H,W,_=im.shape\n",
    "  fig=plt.figure()\n",
    "  fig.set_size_inches(W/dpi, H/dpi)\n",
    "  ax=fig.add_axes([0,0,1,1])\n",
    "  ax.imshow(im)\n",
    "  return fig, ax\n",
    "\n",
    "def plot_two_images(im1, im2):\n",
    "  \"\"\"\n",
    "  Plot two images and return axis handles\n",
    "  \"\"\"\n",
    "  ax1=plt.subplot(1,2,1)\n",
    "  plt.imshow(im1)\n",
    "  plt.axis('off')\n",
    "  ax2=plt.subplot(1,2,2)\n",
    "  plt.imshow(im2)\n",
    "  plt.axis('off')\n",
    "  return ax1, ax2\n",
    "\n",
    "def normalise_01(im):\n",
    "  \"\"\"\n",
    "  Normalise image to the range (0,1)\n",
    "  \"\"\"\n",
    "  mx = im.max()\n",
    "  mn = im.min()\n",
    "  den = mx-mn\n",
    "  small_val = 1e-9\n",
    "  if (den < small_val):\n",
    "    print('image normalise_01 -- divisor is very small')\n",
    "    den = small_val\n",
    "  return (im-mn)/den\n",
    "\n",
    "def grey_to_rgb(img):\n",
    "  \"\"\"\n",
    "  Convert greyscale to rgb image\n",
    "  \"\"\"\n",
    "  if (len(img.shape)==2):\n",
    "    img = np.expand_dims(img, 2)\n",
    "\n",
    "  img3 = np.repeat(img, 3, 2)\n",
    "  return img3\n",
    "\n",
    "def disc_mask(l):\n",
    "  \"\"\"\n",
    "  Create a binary circular mask of radius l\n",
    "  \"\"\"\n",
    "  sz = 2 * l + 1\n",
    "  m = np.zeros((sz,sz))\n",
    "  x = np.linspace(-l,l,2*l+1)/l\n",
    "  x = np.expand_dims(x, 1)\n",
    "  m = x**2\n",
    "  m = m + m.T\n",
    "  m = m<1\n",
    "  m = np.expand_dims(m, 2)\n",
    "  return m\n",
    "\n",
    "def convolve(im, kernel):\n",
    "  \"\"\"\n",
    "  Wrapper for scipy convolution function\n",
    "  This implements a general 2D convolution of image im with kernel\n",
    "  Note that, strictly speaking, this is correlation rather than convolution\n",
    "\n",
    "  Inputs: im=input image (H, W, B) or (H, W)\n",
    "          kernel=kernel (kH, kW)\n",
    "\n",
    "  Outputs: imc=output image (H, W, B)\n",
    "  \"\"\"\n",
    "  if (len(im.shape)==2):\n",
    "    im = np.expand_dims(im, 2)\n",
    "  H, W, B = im.shape\n",
    "  imc = np.zeros((H, W, B))\n",
    "  for band in range(B):\n",
    "    imc[:, :, band] = sps.correlate2d(im[:, :, band], kernel, mode='same')\n",
    "  return imc\n",
    "\n",
    "def coordinate_image(num_rows,num_cols,r0,r1,c0,c1):\n",
    "  \"\"\"\n",
    "  Creates an image of size (num_rows, num_cols)\n",
    "  with row,col coordinates linearly spaced from r0->r1 and c0->c1\n",
    "  \"\"\"\n",
    "  rval=np.linspace(r0,r1,num_rows)\n",
    "  cval=np.linspace(c0,c1,num_cols)\n",
    "  c,r=np.meshgrid(cval,rval)\n",
    "  M = np.stack([r,c,np.ones(r.shape)],-1)\n",
    "  return M\n",
    "\n",
    "def transform_coordinates(coord_image, M):\n",
    "  \"\"\"\n",
    "  Transform an image containing row,col,1 coordinates by matrix M\n",
    "  \"\"\"\n",
    "  M=np.expand_dims(M,2)\n",
    "  uh=np.dot(coord_image,M.T)\n",
    "  uh=uh[:, :, 0, :]\n",
    "  uh=uh/np.expand_dims(uh[:, :, 2],2)\n",
    "  return uh\n",
    "\n",
    "def warp_image(im, coords):\n",
    "  \"\"\"\n",
    "  Warp image im using row,col,1 image coords\n",
    "  \"\"\"\n",
    "  im_rows,im_cols,im_bands=im.shape\n",
    "  warp_rows,warp_cols,_=coords.shape\n",
    "  map_coords=np.zeros((3,warp_rows,warp_cols,im_bands))\n",
    "  for b in range(im_bands):\n",
    "    map_coords[0,:,:,b]=coords[:,:,0]\n",
    "    map_coords[1,:,:,b]=coords[:,:,1]\n",
    "    map_coords[2,:,:,b]=b\n",
    "  warp_im = map_coordinates(im, map_coords, order=1)\n",
    "  return warp_im\n",
    "\n",
    "# allow accessing these functions by im_util.*\n",
    "im_util=types.SimpleNamespace()\n",
    "im_util.convolve_1d=convolve_1d\n",
    "im_util.convolve_rows=convolve_rows\n",
    "im_util.gauss_kernel=gauss_kernel\n",
    "im_util.convolve_gaussian=convolve_gaussian\n",
    "im_util.compute_gradients=compute_gradients\n",
    "im_util.image_open=image_open\n",
    "im_util.image_save=image_save\n",
    "im_util.image_figure=image_figure\n",
    "im_util.plot_two_images=plot_two_images\n",
    "im_util.normalise_01=normalise_01\n",
    "im_util.grey_to_rgb=grey_to_rgb\n",
    "im_util.disc_mask=disc_mask\n",
    "im_util.convolve=convolve\n",
    "im_util.coordinate_image=coordinate_image\n",
    "im_util.transform_coordinates=transform_coordinates\n",
    "im_util.warp_image=warp_image"
   ]
  },
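  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged illustration of the zero-padding convention used in `im_util.py` (a sketch only, not the required `convolve_1d` solution; `convolve_1d_sketch` is a hypothetical name):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def convolve_1d_sketch(x, k):\n",
    "  # Odd-sized kernel, zero padding; output is the same size as the input.\n",
    "  r = len(k) // 2\n",
    "  xp = np.pad(x, r)          # zero-pad both ends\n",
    "  kf = k[::-1]               # flip the kernel for true convolution\n",
    "  return np.array([np.dot(xp[i:i+len(k)], kf) for i in range(len(x))])\n",
    "\n",
    "x = np.array([0., 1., 2., 3., 4.])\n",
    "k = np.array([1., 0., -1.])\n",
    "print(np.allclose(convolve_1d_sketch(x, k), np.convolve(x, k, mode='same')))\n",
    "```"
   ]
  },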
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### interest_point.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import numpy as np\n",
    "import scipy.ndimage.filters as filters\n",
    "from scipy.ndimage import map_coordinates\n",
    "from matplotlib.patches import Circle\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "#import im_util\n",
    "\n",
    "class InterestPointExtractor:\n",
    "  \"\"\"\n",
    "  Class to extract interest points from an image\n",
    "  \"\"\"\n",
    "  def __init__(self):\n",
    "    self.params={}\n",
    "    self.params['border_pixels']=10\n",
    "    self.params['strength_threshold_percentile']=95\n",
    "    self.params['supression_radius_frac']=0.01\n",
    "\n",
    "  def find_interest_points(self, img):\n",
    "    \"\"\"\n",
    "    Find interest points in greyscale image img\n",
    "\n",
    "    Inputs: img=greyscale input image (H, W, 1)\n",
    "\n",
    "    Outputs: ip=interest points of shape (2, N)\n",
    "    \"\"\"\n",
    "    ip_fun = self.corner_function(img)\n",
    "    row, col = self.find_local_maxima(ip_fun)\n",
    "\n",
    "    ip = np.stack((row,col))\n",
    "    return ip\n",
    "\n",
    "  def corner_function(self, img):\n",
    "    \"\"\"\n",
    "    Compute corner strength function in image img\n",
    "\n",
    "    Inputs: img=grayscale input image (H, W, 1)\n",
    "\n",
    "    Outputs: ip_fun=interest point strength function (H, W, 1)\n",
    "    \"\"\"\n",
    "\n",
    "    H, W, _ = img.shape\n",
    "\n",
    "    # FORNOW: random interest point function\n",
    "    ip_fun = np.random.randn(H, W, 1)\n",
    "\n",
    "    \"\"\"\n",
    "    **********************************************************\n",
    "    *** TODO: write code to compute a corner strength function\n",
    "    **********************************************************\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    **********************************************************\n",
    "    \"\"\"\n",
    "\n",
    "    return ip_fun\n",
    "\n",
    "  def find_local_maxima(self, ip_fun):\n",
    "    \"\"\"\n",
    "    Find local maxima in interest point strength function\n",
    "\n",
    "    Inputs: ip_fun=corner strength function (H, W, 1)\n",
    "\n",
    "    Outputs: row,col=coordinates of interest points\n",
    "    \"\"\"\n",
    "\n",
    "    H, W, _ = ip_fun.shape\n",
    "\n",
    "    # radius for non-maximal suppression\n",
    "    suppression_radius_pixels = int(self.params['supression_radius_frac']*max(H, W))\n",
    "\n",
    "    # minimum of strength function for corners\n",
    "    strength_threshold=np.percentile(ip_fun, self.params['strength_threshold_percentile'])\n",
    "\n",
    "    # don't return interest points within border_pixels of edge\n",
    "    border_pixels = self.params['border_pixels']\n",
    "\n",
    "    # row and column coordinates of interest points\n",
    "    row = []\n",
    "    col = []\n",
    "\n",
    "    # FORNOW: random row and column coordinates\n",
    "    row = np.random.randint(0,H,100)\n",
    "    col = np.random.randint(0,W,100)\n",
    "\n",
    "    \"\"\"\n",
    "    ***************************************************\n",
    "    *** TODO: write code to find local maxima in ip_fun\n",
    "    ***************************************************\n",
    "\n",
    "    Hint: try scipy filters.maximum_filter with im_util.disc_mask\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    ***************************************************\n",
    "    \"\"\"\n",
    "\n",
    "    return row, col\n",
    "\n",
    "class DescriptorExtractor:\n",
    "  \"\"\"\n",
    "  Extract descriptors around interest points\n",
    "  \"\"\"\n",
    "  def __init__(self):\n",
    "    self.params={}\n",
    "    self.params['patch_size']=8\n",
    "    self.params['ratio_threshold']=1.0\n",
    "\n",
    "  def get_descriptors(self, img, ip):\n",
    "    \"\"\"\n",
    "    Extract descriptors from grayscale image img at interest points ip\n",
    "\n",
    "    Inputs: img=grayscale input image (H, W, 1)\n",
    "            ip=interest point coordinates (2, N)\n",
    "\n",
    "    Returns: descriptors=vectorized descriptors (N, num_dims)\n",
    "    \"\"\"\n",
    "    patch_size=self.params['patch_size']\n",
    "    patch_size_div2=int(patch_size/2)\n",
    "    num_dims=patch_size**2\n",
    "\n",
    "    H,W,_=img.shape\n",
    "    num_ip=ip.shape[1]\n",
    "    descriptors=np.zeros((num_ip,num_dims))\n",
    "\n",
    "\n",
    "    for i in range(num_ip):\n",
    "      row=ip[0,i]\n",
    "      col=ip[1,i]\n",
    "\n",
    "      # FORNOW: random image patch\n",
    "      patch=np.random.randn(patch_size,patch_size)\n",
    "\n",
    "      \"\"\"\n",
    "      ******************************************************\n",
    "      *** TODO: write code to extract descriptor at row, col\n",
    "      ******************************************************\n",
    "      \"\"\"\n",
    "\n",
    "\n",
    "      \"\"\"\n",
    "      ******************************************************\n",
    "      \"\"\"\n",
    "\n",
    "      descriptors[i, :]=np.reshape(patch,num_dims)\n",
    "\n",
    "    # normalise descriptors to 0 mean, unit length\n",
    "    mn=np.mean(descriptors,1,keepdims=True)\n",
    "    sd=np.std(descriptors,1,keepdims=True)\n",
    "    small_val = 1e-6\n",
    "    descriptors = (descriptors-mn)/(sd+small_val)\n",
    "\n",
    "    return descriptors\n",
    "\n",
    "  def compute_distances(self, desc1, desc2):\n",
    "    \"\"\"\n",
    "    Compute distances between descriptors\n",
    "\n",
    "    Inputs: desc1=descriptor array (N1, num_dims)\n",
    "            desc2=descriptor array (N2, num_dims)\n",
    "\n",
    "    Returns: dists=array of distances (N1,N2)\n",
    "    \"\"\"\n",
    "    N1,num_dims=desc1.shape\n",
    "    N2,num_dims=desc2.shape\n",
    "\n",
    "    ATB=np.dot(desc1,desc2.T)\n",
    "    AA=np.sum(desc1*desc1,1)\n",
    "    BB=np.sum(desc2*desc2,1)\n",
    "\n",
    "    dists=-2*ATB+np.expand_dims(AA,1)+BB\n",
    "\n",
    "    return dists\n",
    "\n",
    "  def match_descriptors(self, desc1, desc2):\n",
    "    \"\"\"\n",
    "    Find nearest neighbour matches between descriptors\n",
    "\n",
    "    Inputs: desc1=descriptor array (N1, num_dims)\n",
    "            desc2=descriptor array (N2, num_dims)\n",
    "\n",
    "    Returns: match_idx=nearest neighbour index for each desc1 (N1)\n",
    "    \"\"\"\n",
    "    dists=self.compute_distances(desc1, desc2)\n",
    "\n",
    "    match_idx=np.argmin(dists,1)\n",
    "\n",
    "    return match_idx\n",
    "\n",
    "  def match_ratio_test(self, desc1, desc2):\n",
    "    \"\"\"\n",
    "    Find nearest neighbour matches between descriptors\n",
    "    and perform ratio test\n",
    "\n",
    "    Inputs: desc1=descriptor array (N1, num_dims)\n",
    "            desc2=descriptor array (N2, num_dims)\n",
    "\n",
    "    Returns: match_idx=nearest neighbour index for each desc1 (N1)\n",
    "             ratio_pass=whether each match passes ratio test (N1)\n",
    "    \"\"\"\n",
    "    N1,num_dims=desc1.shape\n",
    "\n",
    "    dists=self.compute_distances(desc1, desc2)\n",
    "\n",
    "    sort_idx=np.argsort(dists,1)\n",
    "\n",
    "    match_idx=sort_idx[:,0]\n",
    "\n",
    "    d1NN=dists[np.arange(0,N1),sort_idx[:,0]]\n",
    "    d2NN=dists[np.arange(0,N1),sort_idx[:,1]]\n",
    "\n",
    "    ratio_threshold=self.params['ratio_threshold']\n",
    "    ratio_pass=(d1NN<ratio_threshold*d2NN)\n",
    "\n",
    "    return match_idx,ratio_pass\n",
    "\n",
    "def draw_interest_points_ax(ip, ax):\n",
    "  \"\"\"\n",
    "  Draw interest points ip on axis ax\n",
    "  \"\"\"\n",
    "  for row,col in zip(ip[0,:],ip[1,:]):\n",
    "    circ1 = Circle((col,row), 5)\n",
    "    circ1.set_color('black')\n",
    "    circ2 = Circle((col,row), 3)\n",
    "    circ2.set_color('white')\n",
    "    ax.add_patch(circ1)\n",
    "    ax.add_patch(circ2)\n",
    "\n",
    "def draw_interest_points_file(im, ip, filename):\n",
    "  \"\"\"\n",
    "  Draw interest points ip on image im and save to filename\n",
    "  \"\"\"\n",
    "  fig,ax = im_util.image_figure(im)\n",
    "  draw_interest_points_ax(ip, ax)\n",
    "  fig.savefig(filename)\n",
    "  plt.close(fig)\n",
    "\n",
    "def draw_matches_ax(ip1, ipm, ax1, ax2):\n",
    "  \"\"\"\n",
    "  Draw matches ip1, ipm on axes ax1, ax2\n",
    "  \"\"\"\n",
    "  for r1,c1,r2,c2 in zip(ip1[0,:], ip1[1,:], ipm[0,:], ipm[1,:]):\n",
    "    rand_colour=np.random.rand(3,)\n",
    "\n",
    "    circ1 = Circle((c1,r1), 5)\n",
    "    circ1.set_color('black')\n",
    "    circ2 = Circle((c1,r1), 3)\n",
    "    circ2.set_color(rand_colour)\n",
    "    ax1.add_patch(circ1)\n",
    "    ax1.add_patch(circ2)\n",
    "\n",
    "    circ3 = Circle((c2,r2), 5)\n",
    "    circ3.set_color('black')\n",
    "    circ4 = Circle((c2,r2), 3)\n",
    "    circ4.set_color(rand_colour)\n",
    "    ax2.add_patch(circ3)\n",
    "    ax2.add_patch(circ4)\n",
    "\n",
    "def draw_matches_file(im1, im2, ip1, ipm, filename):\n",
    "  \"\"\"\n",
    "  Draw matches ip1, ipm on images im1, im2 and save to filename\n",
    "  \"\"\"\n",
    "  H1,W1,B1=im1.shape\n",
    "  H2,W2,B2=im2.shape\n",
    "\n",
    "  im3 = np.zeros((max(H1,H2),W1+W2,3))\n",
    "  im3[0:H1,0:W1,:]=im1\n",
    "  im3[0:H2,W1:(W1+W2),:]=im2\n",
    "\n",
    "  fig,ax = im_util.image_figure(im3)\n",
    "  col_offset=W1\n",
    "\n",
    "  for r1,c1,r2,c2 in zip(ip1[0,:], ip1[1,:], ipm[0,:], ipm[1,:]):\n",
    "    rand_colour=np.random.rand(3,)\n",
    "\n",
    "    circ1 = Circle((c1,r1), 5)\n",
    "    circ1.set_color('black')\n",
    "    circ2 = Circle((c1,r1), 3)\n",
    "    circ2.set_color(rand_colour)\n",
    "    ax.add_patch(circ1)\n",
    "    ax.add_patch(circ2)\n",
    "\n",
    "    circ3 = Circle((c2+col_offset,r2), 5)\n",
    "    circ3.set_color('black')\n",
    "    circ4 = Circle((c2+col_offset,r2), 3)\n",
    "    circ4.set_color(rand_colour)\n",
    "    ax.add_patch(circ3)\n",
    "    ax.add_patch(circ4)\n",
    "\n",
    "  fig.savefig(filename)\n",
    "  plt.close(fig)\n",
    "\n",
    "def plot_descriptors(desc,plt):\n",
    "  \"\"\"\n",
    "  Plot a random set of descriptor patches\n",
    "  \"\"\"\n",
    "  num_ip,num_dims = desc.shape\n",
    "  patch_size = int(np.sqrt(num_dims))\n",
    "\n",
    "  N1,N2=2,8\n",
    "  figsize0=plt.rcParams['figure.figsize']\n",
    "  plt.rcParams['figure.figsize'] = (16.0, 4.0)\n",
    "  for i in range(N1):\n",
    "    for j in range(N2):\n",
    "      ax=plt.subplot(N1,N2,i*N2+j+1)\n",
    "      rnd=np.random.randint(0,num_ip)\n",
    "      desc_im=np.reshape(desc[rnd,:],(patch_size,patch_size))\n",
    "      plt.imshow(im_util.grey_to_rgb(im_util.normalise_01(desc_im)))\n",
    "      plt.axis('off')\n",
    "\n",
    "  plt.rcParams['figure.figsize']=figsize0\n",
    "\n",
    "def plot_matching_descriptors(desc1,desc2,desc1_id,desc2_id,plt):\n",
    "  \"\"\"\n",
    "  Plot a random set of matching descriptor patches\n",
    "  \"\"\"\n",
    "  num_inliers=desc1_id.size\n",
    "  num_ip,num_dims = desc1.shape\n",
    "  patch_size=int(np.sqrt(num_dims))\n",
    "\n",
    "  figsize0=plt.rcParams['figure.figsize']\n",
    "\n",
    "  N1,N2=1,8\n",
    "  plt.rcParams['figure.figsize'] = (16.0, N1*4.0)\n",
    "\n",
    "  for i in range(N1):\n",
    "    for j in range(N2):\n",
    "      rnd=np.random.randint(0,num_inliers)\n",
    "\n",
    "      desc1_rnd=desc1_id[rnd]\n",
    "      desc2_rnd=desc2_id[rnd]\n",
    "\n",
    "      desc1_im=np.reshape(desc1[desc1_rnd,:],(patch_size,patch_size))\n",
    "      desc2_im=np.reshape(desc2[desc2_rnd,:],(patch_size,patch_size))\n",
    "\n",
    "      ax=plt.subplot(2*N1,N2,2*i*N2+j+1)\n",
    "      plt.imshow(im_util.grey_to_rgb(im_util.normalise_01(desc1_im)))\n",
    "      plt.axis('off')\n",
    "      ax=plt.subplot(2*N1,N2,2*i*N2+N2+j+1)\n",
    "      plt.imshow(im_util.grey_to_rgb(im_util.normalise_01(desc2_im)))\n",
    "      plt.axis('off')\n",
    "\n",
    "  plt.rcParams['figure.figsize'] = figsize0\n",
    "\n",
    "# allow accessing these functions by interest_point.*\n",
    "interest_point=types.SimpleNamespace()\n",
    "interest_point.InterestPointExtractor=InterestPointExtractor\n",
    "interest_point.DescriptorExtractor=DescriptorExtractor\n",
    "interest_point.draw_interest_points_ax=draw_interest_points_ax\n",
    "interest_point.draw_interest_points_file=draw_interest_points_file\n",
    "interest_point.draw_matches_ax=draw_matches_ax\n",
    "interest_point.draw_matches_file=draw_matches_file\n",
    "interest_point.plot_descriptors=plot_descriptors\n",
    "interest_point.plot_matching_descriptors=plot_matching_descriptors"
   ]
  },
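  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `compute_distances` method above relies on the identity ||a-b||^2 = ||a||^2 - 2 a.b + ||b||^2 to compute all pairwise squared distances with matrix products. A small self-contained check on random data (illustration only, not part of the assignment):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "desc1 = np.random.randn(5, 16)\n",
    "desc2 = np.random.randn(7, 16)\n",
    "# vectorised squared distances, as in compute_distances\n",
    "dists = -2 * desc1 @ desc2.T + np.sum(desc1**2, 1)[:, None] + np.sum(desc2**2, 1)\n",
    "# brute-force reference\n",
    "ref = np.array([[np.sum((a - b)**2) for b in desc2] for a in desc1])\n",
    "print(np.allclose(dists, ref))\n",
    "```"
   ]
  },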
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### ransac.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import os.path\n",
    "import numpy as np\n",
    "from time import time\n",
    "\n",
    "#import im_util\n",
    "#import interest_point\n",
    "#import geometry\n",
    "\n",
    "class RANSAC:\n",
    "  \"\"\"\n",
    "  Find 2-view consistent matches using RANSAC\n",
    "  \"\"\"\n",
    "  def __init__(self):\n",
    "    self.params={}\n",
    "    self.params['num_iterations']=500\n",
    "    self.params['inlier_dist']=10\n",
    "    self.params['min_sample_dist']=2\n",
    "\n",
    "  def consistent(self, H, p1, p2):\n",
    "    \"\"\"\n",
    "    Find interest points that are consistent with 2D transform H\n",
    "\n",
    "    Inputs: H=homography matrix (3,3)\n",
    "            p1,p2=corresponding points in images 1,2 of shape (2, N)\n",
    "\n",
    "    Outputs: cons=list of inliers indicated by true/false (num_points)\n",
    "\n",
    "    Assumes that H maps from 1 to 2, i.e., hom(p2) ~= H hom(p1)\n",
    "    \"\"\"\n",
    "\n",
    "    cons = np.zeros((p1.shape[1]))\n",
    "    inlier_dist = self.params['inlier_dist']\n",
    "\n",
    "    \"\"\"\n",
    "    ************************************************\n",
    "    *** TODO: write code to check consistency with H\n",
    "    ************************************************\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    ************************************************\n",
    "    \"\"\"\n",
    "\n",
    "    return cons\n",
    "\n",
    "  def compute_similarity(self,p1,p2):\n",
    "    \"\"\"\n",
    "    Compute similarity transform between pairs of points\n",
    "\n",
    "    Input: p1,p2=arrays of coordinates (2, 2)\n",
    "\n",
    "    Output: Similarity matrix S (3, 3)\n",
    "\n",
    "    Assume S maps from 1 to 2, i.e., hom(p2) = S hom(p1)\n",
    "    \"\"\"\n",
    "\n",
    "    S = np.eye(3,3)\n",
    "\n",
    "    \"\"\"\n",
    "    ****************************************************\n",
    "    *** TODO: write code to compute similarity transform\n",
    "    ****************************************************\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    ****************************************************\n",
    "    \"\"\"\n",
    "\n",
    "    return S\n",
    "\n",
    "  def ransac_similarity(self, ip1, ipm):\n",
    "    \"\"\"\n",
    "    Find 2-view consistent matches under a Similarity transform\n",
    "\n",
    "    Inputs: ip1=interest points (2, num_points)\n",
    "            ipm=matching interest points (2, num_points)\n",
    "            ip[0,:]=row coordinates, ip[1, :]=column coordinates\n",
    "\n",
    "    Outputs: S_best=Similarity matrix (3,3)\n",
    "             inliers_best=list of inliers indicated by true/false (num_points)\n",
    "    \"\"\"\n",
    "    S_best=np.eye(3,3)\n",
    "    inliers_best=[]\n",
    "\n",
    "    \"\"\"\n",
    "    *****************************************************\n",
    "    *** TODO: use ransac to find a similarity transform S\n",
    "    *****************************************************\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    *****************************************************\n",
    "    \"\"\"\n",
    "\n",
    "    return S_best, inliers_best\n",
    "\n",
    "# allow accessing these functions by ransac.*\n",
    "ransac=types.SimpleNamespace()\n",
    "ransac.RANSAC=RANSAC"
   ]
  },
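  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For orientation, the overall RANSAC loop follows a standard pattern: repeatedly fit a model to a minimal random sample and keep the model with the most inliers. The sketch below uses a toy translation model and hypothetical helper names (`fit_fn`, `consistent_fn`); it is not the required `ransac_similarity` solution:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def ransac_loop_sketch(p1, p2, fit_fn, consistent_fn, num_iterations=500):\n",
    "  # p1, p2 are corresponding points of shape (2, num_points)\n",
    "  num_points = p1.shape[1]\n",
    "  best_model, best_inliers = None, np.zeros(num_points, dtype=bool)\n",
    "  for _ in range(num_iterations):\n",
    "    idx = np.random.choice(num_points, 2, replace=False)  # minimal sample\n",
    "    model = fit_fn(p1[:, idx], p2[:, idx])\n",
    "    inliers = consistent_fn(model, p1, p2)\n",
    "    if inliers.sum() > best_inliers.sum():\n",
    "      best_model, best_inliers = model, inliers\n",
    "  return best_model, best_inliers\n",
    "\n",
    "# toy example: pure translation with 10 corrupted matches\n",
    "np.random.seed(0)\n",
    "p1 = np.random.rand(2, 50) * 100\n",
    "p2 = p1 + np.array([[5.0], [7.0]])\n",
    "p2[:, :10] += 30\n",
    "fit = lambda a, b: np.mean(b - a, 1, keepdims=True)\n",
    "cons = lambda t, a, b: np.linalg.norm(b - (a + t), axis=0) < 1.0\n",
    "t_est, inl = ransac_loop_sketch(p1, p2, fit, cons)\n",
    "print(inl.sum())  # the 40 uncorrupted matches survive as inliers\n",
    "```"
   ]
  },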
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### match.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import os.path\n",
    "from time import time\n",
    "import numpy as np\n",
    "\n",
    "#import interest_point\n",
    "#import ransac\n",
    "\n",
    "class ImageMatcher:\n",
    "  \"\"\"\n",
    "  Find geometrically consistent matches in a set of images\n",
    "  \"\"\"\n",
    "  def __init__(self, params={}):\n",
    "    self.params=params\n",
    "    self.params.setdefault('draw_interest_points', False)\n",
    "    self.params.setdefault('draw_matches', False)\n",
    "    self.params.setdefault('results_dir', os.path.expanduser('~/results/tmp/'))\n",
    "\n",
    "  def match_images(self, images):\n",
    "    \"\"\"\n",
    "    Find geometrically consistent matches between images\n",
    "    \"\"\"\n",
    "    # extract interest points and descriptors\n",
    "    print('[ find interest points ]')\n",
    "    t0=time()\n",
    "    interest_points=[]\n",
    "    descriptors=[]\n",
    "    ip_ex = interest_point.InterestPointExtractor()\n",
    "    desc_ex = interest_point.DescriptorExtractor()\n",
    "    num_images = len(images)\n",
    "\n",
    "    for i in range(num_images):\n",
    "      im = images[i]\n",
    "      img = np.mean(im, 2, keepdims=True)\n",
    "      ip = ip_ex.find_interest_points(img)\n",
    "      print(' found '+str(ip.shape[1])+' interest points')\n",
    "      interest_points.append(ip)\n",
    "      desc = desc_ex.get_descriptors(img, ip)\n",
    "      descriptors.append(desc)\n",
    "      if (self.params['draw_interest_points']):\n",
    "        interest_point.draw_interest_points_file(im, ip, self.params['results_dir']+'/ip'+str(i)+'.jpg')\n",
    "\n",
    "    t1=time()\n",
    "    print(' % .2f secs ' % (t1-t0))\n",
    "\n",
    "    # match descriptors and perform ransac\n",
    "    print('[ match descriptors ]')\n",
    "    matches = [[None]*num_images for _ in range(num_images)]\n",
    "    num_matches = np.zeros((num_images, num_images))\n",
    "\n",
    "    t0=time()\n",
    "    rn = ransac.RANSAC()\n",
    "\n",
    "    for i in range(num_images):\n",
    "      ipi = interest_points[i]\n",
    "      print(' image '+str(i))\n",
    "      for j in range(num_images):\n",
    "        if (i==j):\n",
    "          continue\n",
    "\n",
    "        matchesij = desc_ex.match_descriptors(descriptors[i],descriptors[j])\n",
    "        ipm = interest_points[j][:, matchesij]\n",
    "        S, inliers = rn.ransac_similarity(ipi, ipm)\n",
    "        num_matches[i,j]=np.sum(inliers)\n",
    "        ipic=ipi[:, inliers]\n",
    "        ipmc=ipm[:, inliers]\n",
    "        matches[i][j]=np.concatenate((ipic,ipmc),0)\n",
    "\n",
    "        if (self.params['draw_matches']):\n",
    "          imi = images[i]\n",
    "          imj = images[j]\n",
    "          interest_point.draw_matches_file(imi, imj, ipi, ipm, self.params['results_dir']+'/match_raw_'+str(i)+str(j)+'.jpg')\n",
    "          interest_point.draw_matches_file(imi, imj, ipic, ipmc, self.params['results_dir']+'/match_'+str(i)+str(j)+'.jpg')\n",
    "\n",
    "    t1=time()\n",
    "    print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "    return matches, num_matches\n",
    "\n",
    "# allow accessing these functions by match.*\n",
    "match=types.SimpleNamespace()\n",
    "match.ImageMatcher=ImageMatcher"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### geometry.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def compute_rotation(ip1, ip2, K1, K2):\n",
    "  \"\"\"\n",
    "  Find rotation matrix R such that |r2 - R*r1|^2 is minimised\n",
    "\n",
    "  Inputs: ip1,ip2=corresponding interest points (2,num_points),\n",
    "          K1,K2=camera calibration matrices (3,3)\n",
    "\n",
    "  Outputs: R=rotation matrix R (3,3)\n",
    "\n",
    "  r1,r2 are corresponding rays (unit normalised camera coordinates) in image 1,2\n",
    "  \"\"\"\n",
    "\n",
    "  R=H=np.eye(3,3)\n",
    "\n",
    "  \"\"\"\n",
    "  **********************************************************************\n",
    "  *** TODO: write code to compute 3D rotation between corresponding rays\n",
    "  **********************************************************************\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "  \"\"\"\n",
    "  **********************************************************************\n",
    "  \"\"\"\n",
    "  return R, H\n",
    "\n",
    "def get_calibration(imshape, fov_degrees):\n",
    "  \"\"\"\n",
    "  Return calibration matrix K given image shape and field of view\n",
    "\n",
    "  See note on calibration matrix in documentation of K(f, H, W)\n",
    "  \"\"\"\n",
    "  H, W, _ = imshape\n",
    "  f = max(H,W)/(2*np.tan((fov_degrees/2)*np.pi/180))\n",
    "  K1 = K(f,H,W)\n",
    "  return K1\n",
    "\n",
    "def K(f,H,W):\n",
    "  \"\"\"\n",
    "  Return camera calibration matrix given focal length and image size\n",
    "\n",
    "  Inputs: f=focal length, H=image height, W=image width all in pixels\n",
    "\n",
    "  Outputs: K=calibration matrix (3, 3)\n",
    "\n",
    "  The calibration matrix maps camera coordinates [X,Y,Z] to homogeneous image\n",
    "  coordinates ~[row,col,1]. X is assumed to point along the positive col direction,\n",
    "  i.e., incrementing X increments the col dimension in the image\n",
    "  \"\"\"\n",
    "  K1=np.zeros((3,3))\n",
    "  K1[0,1]=K1[1,0]=f\n",
    "  K1[0,2]=H/2\n",
    "  K1[1,2]=W/2\n",
    "  K1[2,2]=1\n",
    "  return K1\n",
    "\n",
    "def hom(p):\n",
    "  \"\"\"\n",
    "  Convert points to homogeneous coordiantes\n",
    "  \"\"\"\n",
    "  ph=np.concatenate((p,np.ones((1,p.shape[1]))))\n",
    "  return ph\n",
    "\n",
    "def unhom(ph):\n",
    "  \"\"\"\n",
    "  Convert points from homogeneous to regular coordinates\n",
    "  \"\"\"\n",
    "  p=ph/ph[2,:]\n",
    "  p=p[0:2,:]\n",
    "  return p\n",
    "\n",
    "# allow accessing these functions by geometry.*\n",
    "geometry=types.SimpleNamespace()\n",
    "geometry.compute_rotation=compute_rotation\n",
    "geometry.get_calibration=get_calibration\n",
    "geometry.K=K\n",
    "geometry.hom=hom\n",
    "geometry.unhom=unhom"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### render.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import numpy as np\n",
    "import skimage.transform\n",
    "from scipy.ndimage import map_coordinates\n",
    "#import im_util\n",
    "\n",
    "def pairwise_warp(im1,im2,H):\n",
    "  \"\"\"\n",
    "  Warp im1 to im2 coords and vice versa\n",
    "  \"\"\"\n",
    "  # skimage transforms assume c,r rather than r,c\n",
    "  P=np.zeros((3,3))\n",
    "  P[0,1]=P[1,0]=P[2,2]=1\n",
    "\n",
    "  HP=np.dot(np.dot(P,H),P)\n",
    "  HPinv=np.linalg.inv(HP)\n",
    "\n",
    "  im1_w = skimage.transform.warp(im1,skimage.transform.ProjectiveTransform(HPinv))\n",
    "  im2_w = skimage.transform.warp(im2,skimage.transform.ProjectiveTransform(HP))\n",
    "\n",
    "  return im1_w, im2_w\n",
    "\n",
    "def pairwise_warp_file(im1,im2,H,results_prefix):\n",
    "  \"\"\"\n",
    "  Warp im1 to im2 coords and vice versa\n",
    "  \"\"\"\n",
    "  im1_w,im2_w = pairwise_warp(im1,im2,H)\n",
    "  im_util.image_save(0.5*(im1+im2_w), results_prefix+'_im1.jpg')\n",
    "  im_util.image_save(0.5*(im2+im1_w), results_prefix+'_im2.jpg')\n",
    "\n",
    "def render_spherical(images, P_matrices, params={}):\n",
    "  \"\"\"\n",
    "  Render images with given projection matrices in spherical coordinates\n",
    "  \"\"\"\n",
    "  params.setdefault('theta_min', -45)\n",
    "  params.setdefault('theta_max', 45)\n",
    "  params.setdefault('phi_min', -30)\n",
    "  params.setdefault('phi_max', 30)\n",
    "  params.setdefault('render_width', 800)\n",
    "\n",
    "  theta_min=params['theta_min'] * np.pi/180\n",
    "  theta_max=params['theta_max'] * np.pi/180\n",
    "  phi_min=params['phi_min'] * np.pi/180\n",
    "  phi_max=params['phi_max'] * np.pi/180\n",
    "\n",
    "  render_width=params['render_width']\n",
    "  render_height=int(render_width*(phi_max-phi_min)/(theta_max-theta_min))\n",
    "\n",
    "  world_coords=np.zeros((render_height, render_width, 3))\n",
    "\n",
    "  theta=np.linspace(theta_min, theta_max, render_width)\n",
    "  phi=np.linspace(phi_max, phi_min, render_height)\n",
    "\n",
    "  cos_phi=np.expand_dims(np.cos(phi),1)\n",
    "  sin_phi=np.expand_dims(np.sin(phi),1)\n",
    "  cos_theta=np.expand_dims(np.cos(theta),0)\n",
    "  sin_theta=np.expand_dims(np.sin(theta),0)\n",
    "\n",
    "  X=np.dot(cos_phi, sin_theta)\n",
    "  Y=-np.dot(sin_phi, np.ones((1,render_width)))\n",
    "  Z=np.dot(cos_phi, cos_theta)\n",
    "\n",
    "  world_coords[:, :, 0]=X\n",
    "  world_coords[:, :, 1]=Y\n",
    "  world_coords[:, :, 2]=Z\n",
    "  wc=np.expand_dims(world_coords,2)\n",
    "\n",
    "  pano_im=np.zeros((render_height, render_width, 4))\n",
    "  im_coords=np.zeros((3,render_height, render_width,4))\n",
    "\n",
    "  for im,P in zip(images,P_matrices):\n",
    "    # compute coordinates u ~ P [X Y Z]\n",
    "    uh=np.dot(wc,P.T)\n",
    "    uh=uh[:, :, 0, :]\n",
    "    uh=uh/np.expand_dims(uh[:, :, 2],2)\n",
    "\n",
    "    for b in range(4):\n",
    "      im_coords[0,:,:,b]=uh[:, :, 0]\n",
    "      im_coords[1,:,:,b]=uh[:, :, 1]\n",
    "      im_coords[2,:,:,b]=b\n",
    "\n",
    "    # add alpha channel\n",
    "    H,W,_=im.shape\n",
    "    ima=np.concatenate((im,np.ones((H,W,1))),2)\n",
    "    pano_im += map_coordinates(ima, im_coords, order=1)\n",
    "\n",
    "  pano_im=pano_im / np.expand_dims((pano_im[:,:,3]+1e-6),2)\n",
    "  pano_im = pano_im[:,:,0:3]\n",
    "\n",
    "  return pano_im\n",
    "\n",
    "# allow accessing these functions by render.*\n",
    "render=types.SimpleNamespace()\n",
    "render.pairwise_warp=pairwise_warp\n",
    "render.pairwise_warp_file=pairwise_warp_file\n",
    "render.render_spherical=render_spherical"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### panorama.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2017 Google Inc.\n",
    "\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import os.path\n",
    "from time import time\n",
    "import numpy as np\n",
    "\n",
    "#import im_util\n",
    "#import match\n",
    "#import geometry\n",
    "#import ransac\n",
    "#import render\n",
    "\n",
    "class PanoramaStitcher:\n",
    "  \"\"\"\n",
    "  Stitch a panorama from input images\n",
    "  \"\"\"\n",
    "  def __init__(self, images, params={}):\n",
    "    self.images = images\n",
    "    num_images = len(images)\n",
    "    self.matches = [[None]*num_images for _ in range(num_images)]\n",
    "    self.num_matches = np.zeros((num_images, num_images))\n",
    "    self.stitch_order = []\n",
    "    self.R_matrices = [None]*num_images\n",
    "    self.params = params\n",
    "    self.params.setdefault('fov_degrees', 45)\n",
    "    self.params.setdefault('draw_interest_points', False)\n",
    "    self.params.setdefault('draw_matches', False)\n",
    "    self.params.setdefault('draw_pairwise_warp', False)\n",
    "    self.params.setdefault('results_dir', os.path.expanduser('~/results/tmp/'))\n",
    "\n",
    "  def stitch(self):\n",
    "    \"\"\"\n",
    "    Match images and perform alignment\n",
    "    \"\"\"\n",
    "    self.match_images()\n",
    "    self.align_panorama()\n",
    "\n",
    "  def match_images(self):\n",
    "    \"\"\"\n",
    "    Match images\n",
    "    \"\"\"\n",
    "    im=match.ImageMatcher(self.params)\n",
    "    self.matches, self.num_matches = im.match_images(self.images)\n",
    "\n",
    "  def align_panorama(self):\n",
    "    \"\"\"\n",
    "    Perform global alignment\n",
    "    \"\"\"\n",
    "    # FORNOW identity rotations\n",
    "    num_images = len(self.images)\n",
    "    for i in range(num_images):\n",
    "      self.R_matrices[i]=np.eye(3,3)\n",
    "\n",
    "    \"\"\"\n",
    "    ***************************************************************\n",
    "    *** TODO write code to compute a global rotation for each image\n",
    "    ***************************************************************\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    \"\"\"\n",
    "    ***************************************************************\n",
    "    \"\"\"\n",
    "\n",
    "  def render(self, render_params={}):\n",
    "    \"\"\"\n",
    "    Render output panorama\n",
    "    \"\"\"\n",
    "    print('[ render panorama ]')\n",
    "    t0=time()\n",
    "    P_matrices = self.get_projection_matrices()\n",
    "    pano_im=render.render_spherical(self.images, P_matrices, render_params)\n",
    "    t1=time()\n",
    "    print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "    return pano_im\n",
    "\n",
    "  def get_projection_matrices(self):\n",
    "    \"\"\"\n",
    "    Return projection matrices P such that u~=PX\n",
    "    \"\"\"\n",
    "    num_images = self.num_matches.shape[0]\n",
    "    P_matrices = [None]*num_images\n",
    "    for i in range(num_images):\n",
    "      Ki = geometry.get_calibration(self.images[i].shape, self.params['fov_degrees'])\n",
    "      P_matrices[i] = np.dot(Ki, self.R_matrices[i])\n",
    "\n",
    "    return P_matrices\n",
    "\n",
    "\n",
    "# allow accessing these functions by panorama.*\n",
    "panorama=types.SimpleNamespace()\n",
    "panorama.PanoramaStitcher=PanoramaStitcher"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Image Warping Test\n",
    "\n",
    "The code below warps an image using a 3x3 transformation matrix. Experiment with the matrix P to test some of the different 2D transformations described in class, e.g., similarity, affine and projective transforms."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read image\n",
    "image_dir='data/test'\n",
    "im_filename1=image_dir+'/100-0038_img.jpg'\n",
    "im=im_util.image_open(im_filename1)\n",
    "im_rows,im_cols,_=im.shape\n",
    "\n",
    "# set transformation matrix\n",
    "P=[[1, 0.2, -64],\n",
    "  [ 0, 1.1, -120],\n",
    "  [ 0, 5.2e-4, 0.83]]\n",
    "\n",
    "# warp coordinates\n",
    "r0,r1=-im_rows/2, im_rows*3/2\n",
    "c0,c1=-im_cols/2, im_cols*3/2\n",
    "warp_rows, warp_cols=im_rows, im_cols\n",
    "\n",
    "coords=im_util.coordinate_image(warp_rows,warp_cols,r0,r1,c0,c1)\n",
    "coords_t=im_util.transform_coordinates(coords, P)\n",
    "\n",
    "# visualise result\n",
    "warp_im1=im_util.warp_image(im,coords)\n",
    "warp_im2=im_util.warp_image(im,coords_t)\n",
    "alpha=im_util.warp_image(np.ones((im_rows,im_cols,1)),coords_t)\n",
    "result_im=warp_im2*alpha + 0.5*warp_im1*(1-alpha)\n",
    "\n",
    "ax1=plt.subplot(1,1,1)\n",
    "plt.imshow(result_im)\n",
    "plt.axis('off')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interest Points Test\n",
    "\n",
    "We will use the interest points and descriptors implemented in [Project 1](https://courses.cs.washington.edu/courses/csep576/18sp/projects/Project1.html \"Project 1\"). If you had trouble getting these to work, or want to test another matching algorithm, you could try vlfeat/sift, see below. Note you'll need to install cyvlfeat, e.g., conda install -c menpo cyvlfeat.\n",
    "\n",
    "`from cyvlfeat import sift \n",
    "frames,desc=sift.sift(img,compute_descriptor=True,n_levels=1)\n",
    "ip=(frames.T)[0:2,:]\n",
    "desc=desc.astype(np.float)`\n",
    "\n",
    "Run the two code blocks below to check your interest points and descriptors are working. For subsequent steps to run well, you should aim for about 100-1000 interest points."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Read a pair of input images and convert to grey\n",
    "\"\"\"\n",
    "image_dir='data/test'\n",
    "#im_filename1=image_dir+'/100-0023_img.jpg'\n",
    "#im_filename2=image_dir+'/100-0024_img.jpg'\n",
    "im_filename1=image_dir+'/100-0038_img.jpg'\n",
    "im_filename2=image_dir+'/100-0039_img.jpg'\n",
    "\n",
    "im1 = im_util.image_open(im_filename1)\n",
    "im2 = im_util.image_open(im_filename2)\n",
    "\n",
    "img1 = np.mean(im1, 2, keepdims=True)\n",
    "img2 = np.mean(im2, 2, keepdims=True)\n",
    "\n",
    "#optionally plot images\n",
    "#ax1,ax2=im_util.plot_two_images(im1, im2)\n",
    "\n",
    "\"\"\"\n",
    "Find interest points in the image pair\n",
    "\"\"\"\n",
    "print('[ find interest points ]')\n",
    "t0=time()\n",
    "ip_ex = interest_point.InterestPointExtractor()\n",
    "ip1 = ip_ex.find_interest_points(img1)\n",
    "print(' found '+str(ip1.shape[1])+' in image 1')\n",
    "ip2 = ip_ex.find_interest_points(img2)\n",
    "print(' found '+str(ip2.shape[1])+' in image 2')\n",
    "t1=time()\n",
    "print(' % .2f secs ' % (t1-t0))\n",
    "\n",
    "# optionally draw interest points\n",
    "#print('[ drawing interest points ]')\n",
    "#ax1,ax2=im_util.plot_two_images(im1,im2)\n",
    "#interest_point.draw_interest_points_ax(ip1, ax1)\n",
    "#interest_point.draw_interest_points_ax(ip2, ax2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Extract and match descriptors\n",
    "\"\"\"\n",
    "print('[ extract descriptors ]')\n",
    "t0=time()\n",
    "desc_ex = interest_point.DescriptorExtractor()\n",
    "desc1 = desc_ex.get_descriptors(img1, ip1)\n",
    "desc2 = desc_ex.get_descriptors(img2, ip2)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "print('[ match descriptors ]')\n",
    "t0=time()\n",
    "match_idx = desc_ex.match_descriptors(desc1, desc2)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "ipm=ip2[:,match_idx]\n",
    "\n",
    "print('[ drawing matches ]')\n",
    "t0=time()\n",
    "ax1,ax2=im_util.plot_two_images(im1,im2)\n",
    "interest_point.draw_matches_ax(ip1, ipm, ax1, ax2)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## RANSAC Implementation\n",
    "\n",
    "We will now use RANSAC to find consistent matches.\n",
    "\n",
    "### Consistency Test [15%]\n",
    "\n",
    "First we will implement a test to count the number of matches consistent with a Similarity transform. The code below generates a random Similarity transform S and a random set of points x. It then transforms the points and adds noise, and checks to see how many of these points are consistent with the ground truth transformation S.\n",
    "\n",
    "Open `ransac.py` and implement the function `consistent`. You should find a high fraction (~80% or more) points are consistent with the true Similarity transform S when running the code below.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Test RANSAC functions using synthetic data\n",
    "\"\"\"\n",
    "# make a random S matrix\n",
    "sd_pos=100\n",
    "sd_angle=np.pi\n",
    "theta=np.random.randn()*sd_angle\n",
    "tx=np.random.randn()*sd_pos\n",
    "ty=np.random.randn()*sd_pos\n",
    "ct=np.cos(theta)\n",
    "st=np.sin(theta)\n",
    "S=[[ct,st,tx],[-st,ct,ty],[0,0,1]]\n",
    "\n",
    "# generate random points\n",
    "num_points=100\n",
    "sd_points=20\n",
    "x = np.random.randn(2,num_points)*sd_points\n",
    "xh = geometry.hom(x)\n",
    "\n",
    "# transform points and add noise\n",
    "sd_noise=5\n",
    "yh = np.dot(S, xh)\n",
    "y = geometry.unhom(yh)\n",
    "yn = y + np.random.randn(2,num_points)*sd_noise\n",
    "\n",
    "print('[ Test of consistent ]')\n",
    "rn = ransac.RANSAC()\n",
    "inliers0=rn.consistent(S,x,yn)\n",
    "num_consistent=np.sum(inliers0)\n",
    "print(' number of points consistent with true S = '+str(num_consistent))\n",
    "if (num_consistent > 0.75*num_points):\n",
    "    print(' consistency check is working!')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Similarity Solver [20%]\n",
    "\n",
    "Now select a sample of 2 point corresondences and compute the Similarity transform corresponding to this pair. Implement `compute_similarity` in `ransac.py` and run the code below to compute the number of inliers. Try varying the indices of the sample to see how the number of inliers varies. Are there any degenerate cases? How could these be detected?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('[ Test of compute_similarity ]')\n",
    "sample=[0,1]\n",
    "S1=rn.compute_similarity(x[:,sample],yn[:,sample])\n",
    "inliers1=rn.consistent(S1,x,yn)\n",
    "num_consistent=np.sum(inliers1)\n",
    "print(' number of points consistent with sample S = '+str(num_consistent))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### RANSAC Loop [15%]\n",
    "\n",
    "Finally, finish the implementation of RANSAC by completing `ransac_similarity` in `ransac.py`. When completed you should find most of the points are labelled consistent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('[ Test of ransac_similarity ]')\n",
    "S2, inliers2=rn.ransac_similarity(x, yn)\n",
    "num_consistent=np.sum(inliers2)\n",
    "print(' number of points consistent with ransac S = '+str(num_consistent))\n",
    "if (num_consistent > 0.75*num_points):\n",
    "    print(' ransac succeeded!')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll now move away from our synthetic test data and run the same code on the interest point matches obtained using the input image pair above. Review the code below and check that the output looks reasonable. You should see a good set of geometrically consistent matches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Perform RANSAC on interest point matches\n",
    "\"\"\"\n",
    "print('[ do ransac ]')\n",
    "t0=time()\n",
    "rn = ransac.RANSAC()\n",
    "S, inliers = rn.ransac_similarity(ip1,ipm)\n",
    "t1=time()\n",
    "num_inliers_s = np.sum(inliers)\n",
    "print(' found '+str(num_inliers_s)+' matches')\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "ip1c = ip1[:, inliers]\n",
    "ipmc = ipm[:, inliers]\n",
    "\n",
    "print('[ drawing matches ]')\n",
    "t0=time()\n",
    "ax1,ax2=im_util.plot_two_images(im1,im2)\n",
    "interest_point.draw_matches_ax(ip1c, ipmc, ax1, ax2)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "# optionally plot descriptors for matched points\n",
    "#inlier_id=np.flatnonzero(inliers)\n",
    "#match_id=match_idx[inlier_id]\n",
    "#interest_point.plot_matching_descriptors(desc1,desc2,inlier_id,match_id,plt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Rotation Estimation [15%]\n",
    "\n",
    "The next task is to estimate the true rotation between the images. To do this, we'll take a guess at the field of view of our input images, and use a closed form algorithm to estimate the rotation. Open `geometry.py` and complete the implementation of `compute_rotation`. You should find that a large number of the matches are consistent with your rotation, and the pairwise warped images should look sensible. Try experimenting with the field of view parameter. What is the best field of view for these images? "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Estimate rotation matrix by least squares\n",
    "\"\"\"\n",
    "print('[ estimate rotation ]')\n",
    "t0=time()\n",
    "# Note: assume field of view of 45 degrees\n",
    "fov_degrees=45\n",
    "print(' assuming fov='+str(fov_degrees))\n",
    "K1 = geometry.get_calibration(im1.shape, fov_degrees)\n",
    "K2 = geometry.get_calibration(im2.shape, fov_degrees)\n",
    "R,H = geometry.compute_rotation(ip1c, ipmc, K1, K2)\n",
    "\n",
    "num_inliers_r = np.sum(rn.consistent(H, ip1, ipm))\n",
    "print(' num consistent with rotation = '+str(num_inliers_r))\n",
    "if (num_inliers_r>0.9 * num_inliers_s):\n",
    "    print(' compute rotation succeeded!')\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "    \n",
    "print('[ test pairwise warp ]')\n",
    "t0=time()\n",
    "im1_w, im2_w = render.pairwise_warp(im1, im2, H)\n",
    "_= im_util.plot_two_images(0.5*(im1+im2_w), 0.5*(im2+im1_w))\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following code renders the aligned images in a spherical coordinate system. Check that the images are well aligned."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Render 2 images in spherical coordinates\n",
    "\"\"\"\n",
    "images=[im1,im2]\n",
    "P1=K1\n",
    "P2=np.dot(K2,R)\n",
    "P_matrices=[P1,P2]\n",
    "\n",
    "render_params={}\n",
    "render_params['render_width']=800\n",
    "render_params['theta_min']=-45\n",
    "render_params['theta_max']=45\n",
    "render_params['phi_min']=-30\n",
    "render_params['phi_max']=30\n",
    "\n",
    "print ('[ render aligned images ]')\n",
    "t0=time()\n",
    "pano_im=render.render_spherical(images, P_matrices, render_params)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "plt.plot()\n",
    "plt.imshow(pano_im)\n",
    "plt.axis('off')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's add more images! The method `PanormaStitcher` class in `panorama.py` takes a set of images as input and wraps the interest point and matching code in the method `match_images`. Take a look at this function and test it on a set of images using the code below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Read a set of input images\n",
    "\"\"\"\n",
    "\n",
    "print('[ read images ]')\n",
    "image_dir='data/test'\n",
    "im_filenames=os.listdir(image_dir)\n",
    "im_filenames=[image_dir+'/'+fname for fname in im_filenames]\n",
    "\n",
    "#im_filenames=[]\n",
    "#im_filenames.append(image_dir+'/100-0023_img.jpg')\n",
    "#im_filenames.append(image_dir+'/100-0024_img.jpg')\n",
    "#im_filenames.append(image_dir+'/100-0038_img.jpg')\n",
    "#im_filenames.append(image_dir+'/100-0039_img.jpg')\n",
    "\n",
    "images=[]\n",
    "for fname in im_filenames:\n",
    "  images.append(im_util.image_open(fname))\n",
    "\n",
    "\"\"\"\n",
    "Stitch images\n",
    "\"\"\"\n",
    "stitch_params={}\n",
    "stitch_params['fov_degrees']=45\n",
    "\n",
    "num_images = len(im_filenames)\n",
    "print(' stitching '+str(num_images)+' images')\n",
    "\n",
    "pano=panorama.PanoramaStitcher(images, stitch_params)\n",
    "pano.match_images()\n",
    "\n",
    "print(' num_matches=')\n",
    "print(pano.num_matches)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Panorama Alignment [15%]\n",
    "\n",
    "Now write code to compute a rotation matrix for each image (the first image is assumed to be the identity rotation) by chaining together pairwise rotations. The code for this should go in `align_panorama` in `panorama.py`.\n",
    "\n",
    "You can now use the `render` method to stich all images in spherical coordinates, as shown in the code below. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pano.align_panorama()\n",
    "\n",
    "render_params={}\n",
    "render_params['render_width']=800\n",
    "render_params['theta_min']=-45\n",
    "render_params['theta_max']=45\n",
    "render_params['phi_min']=-30\n",
    "render_params['phi_max']=30\n",
    "\n",
    "pano_im = pano.render(render_params)\n",
    "\n",
    "plt.plot()\n",
    "plt.imshow(pano_im)\n",
    "plt.axis('off')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Testing and Improving the Panorama Stitcher [20%]\n",
    "\n",
    "You should now have a complete implementation of a basic panorama stitcher. Try it out using a few different image sets and make a note of any issues/artifacts in the results. How could the results be improved? Write a list of possible improvements, and think of new features you might like to add. Now implement some of these improvements / new features and document your work in the notebook below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### TODO your improvements to the panorama stitcher\n",
    "\n",
    "\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
