{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Project 1: Image Feature Extraction and Matching\n",
    "\n",
    "Start Jupyter and open `Project1.ipynb`.\n",
    "\n",
    "In this project you will build an image feature matcher, starting with simple convolution operations and moving on to interest point detection and descriptor extraction. Once you have a basic feature matcher working, try out some improvements and document your results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import os.path\n",
    "from time import time\n",
    "import types\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "import im_util\n",
    "import interest_point\n",
    "\n",
    "%matplotlib inline\n",
    "# edit this line to change the figure size\n",
    "plt.rcParams['figure.figsize'] = (16.0, 10.0)\n",
    "# force auto-reload of imported modules before running code\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Convolution and Image Filtering [5 points]\n",
    "\n",
    "Start by writing code to perform 1D convolution. Open `im_util.py` and edit the function `convolve_1d`. You should use only basic numpy array operations and loops; don't worry about efficiency for now. You should see a very small error compared to the reference numpy version.\n",
    "\n",
    "Note that convolution and correlation are identical after a simple operation on the kernel (what is it?). For which kernels do convolution and correlation produce identical results?"
   ]
  },
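  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a standalone sanity check (independent of `im_util`), the cell below uses numpy built-ins to illustrate the kernel-flip relationship between convolution and correlation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = (np.random.rand(20) > 0.8).astype(np.float32)\n",
    "k = np.array([1.0, 3.0, 1.0])\n",
    "\n",
    "# Convolution flips the kernel before the sliding dot product;\n",
    "# correlation does not. Correlating with the flipped kernel\n",
    "# therefore reproduces convolution exactly.\n",
    "print(np.allclose(np.convolve(x, k, 'same'),\n",
    "                  np.correlate(x, k[::-1], 'same')))  # True\n",
    "\n",
    "# A symmetric kernel (k == k[::-1]) makes the two operations agree.\n",
    "print(np.allclose(np.convolve(x, k, 'same'),\n",
    "                  np.correlate(x, k, 'same')))  # True"
   ]
  },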
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Test of convolve_1d\n",
    "\"\"\"\n",
    "print('[ Test convolve_1d ]')\n",
    "x = (np.random.rand(20)>0.8).astype(np.float32)\n",
    "k = np.array([1,3,1])\n",
    "y1 = im_util.convolve_1d(x, k)\n",
    "y2 = np.convolve(x, k, 'same')\n",
    "y3 = np.correlate(x, k, 'same')\n",
    "print(' convolve error = ', np.sum((y1-y2)**2))\n",
    "print(' correlate error = ', np.sum((y1-y3)**2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now convolve a 2D image with a 1D kernel. Complete the `convolve_rows` function in `im_util.py` by convolving each row of the image with the kernel. Run the code below and check that the image output looks sensible."
   ]
  },
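  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the row-wise idea, using `np.convolve` per row (the exercise itself asks for explicit loops); `convolve_rows_sketch` is a hypothetical stand-in for the `im_util` function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def convolve_rows_sketch(im, k):\n",
    "    # Convolve every row of every colour channel with the 1D kernel k,\n",
    "    # keeping the output the same width as the input ('same' mode).\n",
    "    out = np.zeros_like(im, dtype=np.float64)\n",
    "    for c in range(im.shape[2]):\n",
    "        for r in range(im.shape[0]):\n",
    "            out[r, :, c] = np.convolve(im[r, :, c], k, 'same')\n",
    "    return out\n",
    "\n",
    "im_test = np.random.rand(4, 8, 3)\n",
    "k_test = np.array([1.0, 2.0, 1.0]) / 4.0\n",
    "print(convolve_rows_sketch(im_test, k_test).shape)  # (4, 8, 3)"
   ]
  },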
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Test of convolve_image\n",
    "\"\"\"\n",
    "image_filename='data/test/100-0038_img.jpg'\n",
    "\n",
    "print('[ Test convolve_image ]')\n",
    "im = im_util.image_open(image_filename)\n",
    "k = np.array([1,2,3,4,5,6,5,4,3,2,1])\n",
    "print(' convolve_rows')\n",
    "t0=time()\n",
    "im1 = im_util.convolve_rows(im, k)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "print(' scipy convolve')\n",
    "t0=time()\n",
    "im2 = im_util.convolve(im, np.expand_dims(k,0))\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "print(' convolve_image error =', np.sum((im1-im2)**2))\n",
    "\n",
    "# optionally plot images for debugging\n",
    "#im1_norm=im_util.normalise_01(im1)\n",
    "#im2_norm=im_util.normalise_01(im2)\n",
    "#ax1,ax2=im_util.plot_two_images(im1_norm, im2_norm)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You will probably find that scipy's convolution runs much faster than your version. To speed things up, you can use this version (`im_util.convolve`) in all subsequent experiments. Note that it performs a general 2D convolution and takes a 2D kernel as input.\n",
    "\n",
    "Now write code to perform Gaussian blurring. First implement the function `gauss_kernel` to compute a 1D Gaussian kernel. Then complete `convolve_gaussian`, which performs a separable convolution with this kernel."
   ]
  },
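  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, a sketch of both steps using `scipy.signal` (the hypothetical `gauss_kernel_sketch` samples a normalised Gaussian; two 1D passes reproduce the full 2D convolution because the 2D Gaussian is the outer product of two 1D kernels):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy import signal\n",
    "\n",
    "def gauss_kernel_sketch(sigma, truncate=3.0):\n",
    "    # Sample exp(-x^2 / (2 sigma^2)) at integer offsets out to\n",
    "    # +/- truncate*sigma and normalise so the weights sum to 1.\n",
    "    r = int(np.ceil(truncate * sigma))\n",
    "    x = np.arange(-r, r + 1)\n",
    "    k = np.exp(-x**2 / (2.0 * sigma**2))\n",
    "    return k / k.sum()\n",
    "\n",
    "k = gauss_kernel_sketch(2.0)\n",
    "im_test = np.random.rand(32, 32)\n",
    "\n",
    "# Separable convolution: a horizontal pass then a vertical pass\n",
    "# equals one 2D convolution with the outer product of the kernels.\n",
    "sep = signal.convolve2d(\n",
    "    signal.convolve2d(im_test, k[np.newaxis, :], mode='same'),\n",
    "    k[:, np.newaxis], mode='same')\n",
    "full = signal.convolve2d(im_test, np.outer(k, k), mode='same')\n",
    "print(np.allclose(sep, full))  # True"
   ]
  },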
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Gaussian blurring test\n",
    "\"\"\"\n",
    "print('[ Test convolve_gaussian ]')\n",
    "\n",
    "sigma=4.0\n",
    "k=im_util.gauss_kernel(sigma)\n",
    "print(' gauss kernel = ')\n",
    "print(k)\n",
    "\n",
    "t0=time()\n",
    "im1 = im_util.convolve_gaussian(im, sigma)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "ax1,ax2=im_util.plot_two_images(im, im1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now write code in the function `compute_gradients` to compute horizontal and vertical gradients. Use explicit kernels convolved in each direction (i.e., do not use built-in functions such as `numpy.gradient`). Run the code below and check that the output looks sensible."
   ]
  },
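  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged, standalone illustration (using `scipy.signal` rather than your `im_util.convolve`), a central difference applied to a horizontal ramp behaves as expected. Note that the derivative kernel [-1/2, 0, 1/2] appears reversed in the code because convolution flips the kernel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy import signal\n",
    "\n",
    "# Central-difference derivative kernel [-1/2, 0, 1/2]; written\n",
    "# reversed here because convolution flips the kernel.\n",
    "kx = np.array([[0.5, 0.0, -0.5]])  # horizontal derivative\n",
    "ky = kx.T                          # vertical derivative\n",
    "\n",
    "ramp = np.tile(np.arange(8, dtype=np.float64), (8, 1))  # I(x, y) = x\n",
    "\n",
    "Ix = signal.convolve2d(ramp, kx, mode='same')\n",
    "Iy = signal.convolve2d(ramp, ky, mode='same')\n",
    "\n",
    "# Away from the border, the ramp has horizontal gradient 1 and\n",
    "# vertical gradient 0.\n",
    "print(Ix[4, 4], Iy[4, 4])  # 1.0 0.0"
   ]
  },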
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Gradient computation test\n",
    "\"\"\"\n",
    "print('[ Test gradient computation ]')\n",
    "img = np.mean(im,2,keepdims=True)\n",
    "Ix,Iy = im_util.compute_gradients(img)\n",
    "\n",
    "# copy greyvalue to RGB channels\n",
    "Ix_out = im_util.grey_to_rgb(im_util.normalise_01(Ix))\n",
    "Iy_out = im_util.grey_to_rgb(im_util.normalise_01(Iy))\n",
    "\n",
    "im_util.plot_two_images(Ix_out, Iy_out)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interest Point Extractor [5 points]\n",
    "\n",
    "You will now use these convolution functions to implement a corner or interest point detector. Choose a well-known detector, such as Harris or DoG, and implement the interest point strength function in `corner_function` of `interest_point.py`. Run the code below to visualise the output of your corner function. Next, detect corners as local maxima of this function by filling in `find_local_maxima` in the same file."
   ]
  },
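  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you choose Harris, the sketch below illustrates the idea using scipy: the response is det(M) - alpha * trace(M)^2 of the Gaussian-weighted second-moment matrix M, and local maxima are found with a maximum filter. This is a hypothetical standalone illustration (`harris_response_sketch`), not the structure expected in `interest_point.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy import ndimage\n",
    "\n",
    "def harris_response_sketch(img, sigma=1.5, alpha=0.06):\n",
    "    # Second-moment matrix M, smoothed with a Gaussian window, then\n",
    "    # corner strength det(M) - alpha * trace(M)^2.\n",
    "    Iy, Ix = np.gradient(img)\n",
    "    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)\n",
    "    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)\n",
    "    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)\n",
    "    return Sxx * Syy - Sxy**2 - alpha * (Sxx + Syy)**2\n",
    "\n",
    "# A white square on black: the response peaks near its four corners\n",
    "# (edges have one large eigenvalue only, so their response is low).\n",
    "sq = np.zeros((32, 32))\n",
    "sq[8:24, 8:24] = 1.0\n",
    "R = harris_response_sketch(sq)\n",
    "\n",
    "# Local maxima above a threshold become the interest points.\n",
    "is_max = (R == ndimage.maximum_filter(R, size=5)) & (R > 0.1 * R.max())\n",
    "row, col = np.nonzero(is_max)\n",
    "print(len(row))  # number of detected corners"
   ]
  },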
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Compute corner strength function\n",
    "\"\"\"\n",
    "print('[ Compute corner strength ]')\n",
    "ip_ex = interest_point.InterestPointExtractor()\n",
    "ip_fun = ip_ex.corner_function(img)\n",
    "\n",
    "# normalise for display\n",
    "[mn,mx]=np.percentile(ip_fun,[5,95])\n",
    "small_val=1e-9\n",
    "ip_fun_norm=(ip_fun-mn)/(mx-mn+small_val)\n",
    "ip_fun_norm=np.maximum(np.minimum(ip_fun_norm,1.0),0.0)\n",
    "\n",
    "\"\"\"\n",
    "Find local maxima of corner strength\n",
    "\"\"\"\n",
    "print('[ Find local maxima ]')\n",
    "row, col = ip_ex.find_local_maxima(ip_fun)\n",
    "ip = np.stack((row,col))\n",
    "\n",
    "ax1,ax2=im_util.plot_two_images(im_util.grey_to_rgb(ip_fun_norm),im)\n",
    "interest_point.draw_interest_points_ax(ip, ax2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Descriptors and Matching [5 points]\n",
    "\n",
    "Now let's match our interest points. Start by extracting a very simple descriptor that is just a patch of pixels around the interest point. To do this, fill in the `get_descriptors` function in `interest_point.py`. The code below outputs a random set of normalised descriptor patches; check that the output looks sensible. Once it does, try varying the sampling spacing of the descriptor patch. By \"sampling spacing\" we mean the distance, in pixels of the underlying image, between adjacent samples in the descriptor patch. What problem arises when the sampling spacing is greater than 1 pixel, and how could you fix it?"
   ]
  },
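  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the sampling-spacing question, here is a hypothetical standalone patch sampler (`get_patch_sketch`, not the `interest_point.py` signature). A spacing greater than 1 pixel sub-samples the image, which causes aliasing; pre-blurring with a sigma of roughly half the spacing mitigates it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy import ndimage\n",
    "\n",
    "def get_patch_sketch(img, row, col, patch_size=8, spacing=2):\n",
    "    # Sample a patch_size x patch_size grid centred on (row, col) with\n",
    "    # `spacing` pixels between samples. Spacing > 1 sub-samples the\n",
    "    # image, which aliases high frequencies; blurring first with\n",
    "    # sigma ~ spacing / 2 mitigates this.\n",
    "    smooth = ndimage.gaussian_filter(img, sigma=spacing / 2.0)\n",
    "    offs = ((np.arange(patch_size) - patch_size / 2 + 0.5) * spacing).astype(int)\n",
    "    patch = smooth[np.ix_(row + offs, col + offs)]\n",
    "    # Normalise to zero mean / unit variance for illumination invariance.\n",
    "    return (patch - patch.mean()) / (patch.std() + 1e-9)\n",
    "\n",
    "img_test = np.random.rand(64, 64)\n",
    "d = get_patch_sketch(img_test, 32, 32)\n",
    "print(d.shape)  # (8, 8)"
   ]
  },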
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Extract descriptors\n",
    "\"\"\"\n",
    "print('[ Extract descriptors ]')\n",
    "desc_ex=interest_point.DescriptorExtractor()\n",
    "descriptors=desc_ex.get_descriptors(img, ip)\n",
    "interest_point.plot_descriptors(descriptors,plt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now match descriptors between a pair of images. Run the following two code blocks to extract your interest points, then extract and match descriptors. The second block calls a function that performs nearest-neighbour matching of descriptors and filters the matches with a ratio test. Read through the code and check that you understand how it works."
   ]
  },
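  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core of the matcher, nearest-neighbour search plus a ratio test, can be sketched independently on synthetic descriptors (hypothetical `match_ratio_test_sketch`; the project's own implementation lives in `interest_point.py`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def match_ratio_test_sketch(desc1, desc2, ratio_threshold=0.8):\n",
    "    # For each descriptor in desc1, find its two nearest neighbours\n",
    "    # in desc2 (squared Euclidean distance). A match passes the ratio\n",
    "    # test when the best distance is well below the second best,\n",
    "    # i.e. the match is unambiguous.\n",
    "    d1_sq = np.sum(desc1**2, axis=1, keepdims=True)\n",
    "    d2_sq = np.sum(desc2**2, axis=1)\n",
    "    # Clamp tiny negative values caused by floating-point error.\n",
    "    dists = np.maximum(d1_sq + d2_sq - 2.0 * desc1 @ desc2.T, 0.0)\n",
    "    order = np.argsort(dists, axis=1)\n",
    "    best, second = order[:, 0], order[:, 1]\n",
    "    rows = np.arange(desc1.shape[0])\n",
    "    ratio = np.sqrt(dists[rows, best] / (dists[rows, second] + 1e-12))\n",
    "    return best, ratio < ratio_threshold\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d2 = rng.normal(size=(50, 16))\n",
    "d1 = d2[:10] + 0.01 * rng.normal(size=(10, 16))  # noisy copies\n",
    "match_idx, ratio_pass = match_ratio_test_sketch(d1, d2)\n",
    "print(np.array_equal(match_idx, np.arange(10)), ratio_pass.all())  # True True"
   ]
  },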
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Read a pair of input images and extract interest points\n",
    "\"\"\"\n",
    "image_dir='data/test'\n",
    "#im_filename1=image_dir+'/100-0023_img.jpg'\n",
    "#im_filename2=image_dir+'/100-0024_img.jpg'\n",
    "im_filename1=image_dir+'/100-0038_img.jpg'\n",
    "im_filename2=image_dir+'/100-0039_img.jpg'\n",
    "\n",
    "im1 = im_util.image_open(im_filename1)\n",
    "im2 = im_util.image_open(im_filename2)\n",
    "\n",
    "img1 = np.mean(im1, 2, keepdims=True)\n",
    "img2 = np.mean(im2, 2, keepdims=True)\n",
    "\n",
    "print('[ find interest points ]')\n",
    "t0=time()\n",
    "ip_ex = interest_point.InterestPointExtractor()\n",
    "ip1 = ip_ex.find_interest_points(img1)\n",
    "print(' found '+str(ip1.shape[1])+' in image 1')\n",
    "ip2 = ip_ex.find_interest_points(img2)\n",
    "print(' found '+str(ip2.shape[1])+' in image 2')\n",
    "t1=time()\n",
    "print(' % .2f secs ' % (t1-t0))\n",
    "\n",
    "print('[ drawing interest points ]')\n",
    "ax1,ax2=im_util.plot_two_images(im1,im2)\n",
    "t0=time()\n",
    "interest_point.draw_interest_points_ax(ip1, ax1)\n",
    "interest_point.draw_interest_points_ax(ip2, ax2)\n",
    "t1=time()\n",
    "print(' % .2f secs ' % (t1-t0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Extract and match descriptors\n",
    "\"\"\"\n",
    "print('[ extract descriptors ]')\n",
    "t0=time()\n",
    "desc_ex = interest_point.DescriptorExtractor()\n",
    "desc1 = desc_ex.get_descriptors(img1, ip1)\n",
    "desc2 = desc_ex.get_descriptors(img2, ip2)\n",
    "t1=time()\n",
    "print(' % .2f secs' % (t1-t0))\n",
    "\n",
    "# Uncomment the following lines to use SIFT descriptors\n",
    "# Note: you'll need to install cyvlfeat, e.g., conda install -c menpo cyvlfeat\n",
    "\n",
    "#from cyvlfeat import sift \n",
    "#frames1,desc1=sift.sift(img1,compute_descriptor=True,n_levels=1)\n",
    "#frames2,desc2=sift.sift(img2,compute_descriptor=True,n_levels=1)\n",
    "#ip1=(frames1.T)[0:2,:]\n",
    "#ip2=(frames2.T)[0:2,:]\n",
    "#desc1=desc1.astype(np.float64)\n",
    "#desc2=desc2.astype(np.float64)\n",
    "\n",
    "print('[ match descriptors ]')\n",
    "match_idx,ratio_pass=desc_ex.match_ratio_test(desc1, desc2)\n",
    "num_ratio_pass=np.sum(ratio_pass)\n",
    "\n",
    "ipm=ip2[:,match_idx]\n",
    "\n",
    "ip1r=ip1[:,ratio_pass]\n",
    "ip2r=ipm[:,ratio_pass]\n",
    "\n",
    "N1,num_dims=desc1.shape\n",
    "print(' Number of interest points = '+str(N1))\n",
    "print(' Number of matches passing ratio test = '+str(num_ratio_pass))\n",
    "\n",
    "ax1,ax2=im_util.plot_two_images(im1,im2)\n",
    "interest_point.draw_matches_ax(ip1r, ip2r, ax1, ax2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following code visualises the matched descriptor patches. Can you tell the correct matches from the incorrect ones? (Re-run the cell to get another random sample.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Plot descriptors for matched points\n",
    "\"\"\"\n",
    "interest_point.plot_matching_descriptors(desc1,desc2,np.arange(0,ip1.shape[1]),match_idx,plt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Testing and Improving Feature Matching [5 points]\n",
    "\n",
    "Try varying the `ratio_threshold` parameter in the descriptor matcher (a parameter of the `DescriptorExtractor` class). What is a good setting for this parameter? If everything is working, you should see a set of correctly matched points (aim for roughly 100 or more). Experiment with your interest point and descriptor implementations to find out which other parameters matter, and try to obtain a good set of matches. Try out new ideas of your own for improving the interest points or descriptors, and document your findings in the notebook below. Be sure to include enough figures/tables and explanation to demonstrate your base case and the effect of your modifications."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### TODO experiments with your detector/descriptors\n",
    "\n",
    "\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
