{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 以Lena为原始图像，通过OpenCV实现平均滤波，高斯滤波及中值滤波，比较滤波结果。 \n",
    "2. 以Lena为原始图像，通过OpenCV使用Sobel及Canny算子检测，比较边缘检测结果。 \n",
    "3. 在OpenCV安装目录下找到课程对应演示图片(安装目录\\sources\\samples\\data)，首先计算灰度直方图，进一步使用大津算法进行分割，并比较分析分割结果。 \n",
    "4. 使用米粒图像，分割得到各米粒，首先计算各区域(米粒)的面积、长度等信息，进一步计算面积、长度的均值及方差，分析落在3sigma范围内米粒的数量。 \n",
    "扩展作业： \n",
    "5. 使用棋盘格及自选风景图像，分别使用SIFT、FAST及ORB算子检测角点，并比较分析检测结果。 \n",
    "(可选)使用Harris角点检测算子检测棋盘格，并与上述结果比较。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. 以Lena为原始图像，通过OpenCV实现平均滤波，高斯滤波及中值滤波，比较滤波结果。 \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "filename = './lena.jpg'\n",
    "img = cv2.imread(filename)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "# 平均滤波\n",
    "img_mean = cv2.blur(img, (5,5))\n",
    "\n",
    "# 高斯滤波\n",
    "img_Guassian = cv2.GaussianBlur(img,(5,5),0)\n",
    "\n",
    "# 中值滤波\n",
    "img_median = cv2.medianBlur(img, 5)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 展示不同的图片\n",
    "titles = ['srcImg', 'mean', 'Gaussian', 'median']\n",
    "imgs = [img, img_mean, img_Guassian, img_median]\n",
    "\n",
    "for i in range(len(titles)):\n",
    "    cv2.imshow(titles[i], imgs[i])\n",
    "    cv2.waitKey()\n",
    "    cv2.destroyAllWindows()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 平均滤波较为模糊，在眼睛、嘴角、帽子的毛的细节处理上不佳。另外，图片中间右侧有几个黄色噪声点，没有很好的去除\n",
    "- 高斯滤波比起平均滤波、中值滤波，在细节处理上要好。但右侧黄色噪声点不如中值滤波效果好"
   ]
  },
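  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The comparison above can also be checked numerically. The sketch below (an addition, not part of the assignment code) adds salt-and-pepper noise to a synthetic gradient image and scores each filter by mean absolute error against the clean image; the image, noise level, and kernel size are all assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Synthetic gradient image with ~4% salt-and-pepper noise (assumed setup)\n",
    "rng = np.random.default_rng(0)\n",
    "clean = np.tile(np.arange(256, dtype=np.uint8), (256, 1))\n",
    "noisy = clean.copy()\n",
    "mask = rng.random(noisy.shape)\n",
    "noisy[mask < 0.02] = 0    # pepper\n",
    "noisy[mask > 0.98] = 255  # salt\n",
    "\n",
    "mean_f = cv2.blur(noisy, (5, 5))\n",
    "gauss_f = cv2.GaussianBlur(noisy, (5, 5), 0)\n",
    "median_f = cv2.medianBlur(noisy, 5)\n",
    "\n",
    "# Mean absolute error against the clean image: the median filter should\n",
    "# suppress impulse noise better than either linear filter\n",
    "for name, out in [('mean', mean_f), ('Gaussian', gauss_f), ('median', median_f)]:\n",
    "    mae = np.abs(out.astype(np.int16) - clean.astype(np.int16)).mean()\n",
    "    print(f'{name}: MAE = {mae:.2f}')\n"
   ]
  },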
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 以Lena为原始图像，通过OpenCV使用Sobel及Canny算子检测，比较边缘检测结果。 \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "# 以灰度图像读出来\n",
    "img = cv2.imread(filename,0)\n",
    "\n",
    "# Sobel\n",
    "x = cv2.Sobel(img, cv2.CV_64F, 1,0, ksize=3)\n",
    "y = cv2.Sobel(img, cv2.CV_64F, 0,1, ksize=3)\n",
    "absX = cv2.convertScaleAbs(x)   # 转回uint8\n",
    "absY = cv2.convertScaleAbs(y)\n",
    "img_sobel = cv2.addWeighted(absX,0.5,absY,0.5,0)\n",
    "cv2.imshow(\"absX\", absX)\n",
    "cv2.imshow(\"absY\", absY)\n",
    "# 结果\n",
    "cv2.imshow(\"Result\", img_sobel)\n",
    "cv2.imwrite('lena-sobel.jpg', img_sobel)\n",
    "cv2.waitKey()\n",
    "cv2.destroyAllWindows()\n",
    "\n",
    "# Canny\n",
    "img_canny = cv2.Canny(img, 80, 150)\n",
    "\n",
    "# 结果\n",
    "cv2.imshow(\"Result\", img_canny)\n",
    "cv2.imwrite('lena-canny.jpg', img_canny)\n",
    "cv2.waitKey()\n",
    "cv2.destroyAllWindows()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Sobel算子有很多虚检的边缘，如鼻梁、帽子横向的纹路。在眼睛等细节较多的部位边缘重复较多\n",
    "- canny算子需要设置合适的阈值，我尝试（50,100）和（80,150），后者没有检测出帽子边缘，而前者在眼下区域检测出非边缘"
   ]
  },
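  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the threshold trade-off above concrete, the sketch below (an addition) sweeps several Canny threshold pairs on a synthetic image, a bright square on a noisy background, and counts the edge pixels each pair keeps. The image and threshold values are assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Synthetic test image: bright square on a noisy background (assumed setup)\n",
    "rng = np.random.default_rng(0)\n",
    "img = rng.integers(100, 130, size=(200, 200)).astype(np.uint8)\n",
    "cv2.rectangle(img, (50, 50), (150, 150), 220, -1)\n",
    "\n",
    "# Raising both thresholds prunes weak (noise) edges, at the risk of\n",
    "# dropping genuine but low-contrast edges as well\n",
    "for lo, hi in [(50, 100), (80, 150), (120, 240)]:\n",
    "    edges = cv2.Canny(img, lo, hi)\n",
    "    print(f'thresholds ({lo}, {hi}): {np.count_nonzero(edges)} edge pixels')\n"
   ]
  },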
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. 在OpenCV安装目录下找到课程对应演示图片(安装目录\\sources\\samples\\data)，首先计算灰度直方图，进一步使用大津算法进行分割，并比较分析分割结果。 \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "from matplotlib import pyplot as plt\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "filename = './example-pic6.png'\n",
    "img = cv2.imread(filename,0)\n",
    "cv2.imshow(\"pic6\", img)\n",
    "cv2.waitKey()\n",
    "cv2.destroyAllWindows()\n",
    "# print(img.shape)\n",
    "hist = cv2.calcHist([img], [0], None, [256], [0, 255])\n",
    "\n",
    "# 画出直方图\n",
    "plt.figure()\n",
    "plt.title(\"Grayscale Histogram\")\n",
    "plt.xlabel(\"Bins\")\n",
    "plt.ylabel(\"number of Pixels\")\n",
    "plt.plot(hist)\n",
    "plt.xlim([0,256])\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "_, th2 = cv2.threshold(img, 0, 256, cv2.THRESH_OTSU)\n",
    "plt.figure()\n",
    "plt.subplot(221), plt.imshow(img, 'gray')\n",
    "plt.subplot(222), plt.hist(img.ravel(), 256) \n",
    "plt.subplot(223), plt.imshow(th2, 'gray')\n",
    "plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 大津算法将图像分割为两个区域，其中图像中颜色较深的被分割出来，但是没有表现出完整的区域，左部和下部两个矩形没有被分割出来"
   ]
  },
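  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a cross-check on what cv2.THRESH_OTSU computes, the sketch below (an addition) re-derives Otsu's threshold from the histogram by maximizing the between-class variance, on a synthetic bimodal image. The image parameters are assumptions; the two results should agree up to an off-by-one in how the split point is defined."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Synthetic bimodal image: two Gaussian intensity clusters (assumed setup)\n",
    "rng = np.random.default_rng(0)\n",
    "pixels = np.concatenate([rng.normal(80, 10, 5000), rng.normal(180, 10, 5000)])\n",
    "img = np.clip(pixels, 0, 255).astype(np.uint8).reshape(100, 100)\n",
    "\n",
    "# Otsu: pick the split t that maximizes the between-class variance\n",
    "# w0*w1*(mu0 - mu1)^2 over the normalized histogram\n",
    "p = np.bincount(img.ravel(), minlength=256) / img.size\n",
    "levels = np.arange(256)\n",
    "best_t, best_var = 0, -1.0\n",
    "for t in range(1, 256):\n",
    "    w0, w1 = p[:t].sum(), p[t:].sum()\n",
    "    if w0 == 0 or w1 == 0:\n",
    "        continue\n",
    "    mu0 = (levels[:t] * p[:t]).sum() / w0\n",
    "    mu1 = (levels[t:] * p[t:]).sum() / w1\n",
    "    var_between = w0 * w1 * (mu0 - mu1) ** 2\n",
    "    if var_between > best_var:\n",
    "        best_t, best_var = t, var_between\n",
    "\n",
    "otsu_t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)\n",
    "print('manual:', best_t, ' OpenCV:', otsu_t)\n"
   ]
  },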
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 使用米粒图像，分割得到各米粒，首先计算各区域(米粒)的面积、长度等信息，进一步计算面积、长度的均值及方差，分析落在3sigma范围内米粒的数量。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import matplotlib as mpl\n",
    "import numpy as np\n",
    "import copy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "# 计算平均值、中位值、方差\n",
    "def stats(li):\n",
    "    return np.mean(li), np.median(li), np.var(li)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "米粒数量：  95\n",
      "面积——平均数： 577.96, 中位数：629.00, 方差：63705.77\n",
      "面积在3sigma之外的数量： 2\n",
      "周长——平均数： 110.28, 中位数：117.10, 方差：1338.52\n",
      "周长在3sigma之外的数量： 2\n"
     ]
    }
   ],
   "source": [
    "\n",
    "filename = 'rice.png'\n",
    "image = cv2.imread(filename)\n",
    "gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n",
    "_, bw = cv2.threshold(gray, 0, 0xff, cv2.THRESH_OTSU)\n",
    "element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))\n",
    "bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, element)\n",
    "\n",
    "seg = copy.deepcopy(bw)\n",
    "bin, cnts, hier = cv2.findContours(\n",
    "    seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n",
    "count = 0\n",
    "li_area = []\n",
    "li_perimeter = []\n",
    "for i in range(len(cnts), 0, -1):\n",
    "    c = cnts[i - 1]\n",
    "    area = cv2.contourArea(c)\n",
    "    perimeter = cv2.arcLength(c , False)\n",
    "    if area < 10: #去除面积过小的点\n",
    "        continue\n",
    "    count += 1\n",
    "    # print(\"blob_area\", i, ' : ', area)\n",
    "    # print(\"blob_perimeter\", i, ' : ', perimeter)\n",
    "    li_area.append(area)\n",
    "    li_perimeter.append(perimeter)\n",
    "    x, y, w, h = cv2.boundingRect(c)\n",
    "    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 0xff), 1)\n",
    "    cv2.putText(image, str(count), (x, y),\n",
    "                cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 0xff, 0))\n",
    "print(\"米粒数量： \", count)\n",
    "cv2.imshow(\"origin\", image)\n",
    "cv2.imshow(\"threshold\", bw)\n",
    "cv2.waitKey()\n",
    "cv2.destroyAllWindows()\n",
    "print(\"面积——平均数： %.2f, 中位数：%.2f, 方差：%.2f\"% stats(li_area))\n",
    "mean, _, var = stats(li_area)\n",
    "std = np.sqrt(var)\n",
    "li_area_3sigma = [i for i in li_area if i>(mean+3*std) or i<(mean-3*std) ]\n",
    "print(\"面积在3sigma之外的数量：\", len(li_area_3sigma))\n",
    "print(\"周长——平均数： %.2f, 中位数：%.2f, 方差：%.2f\"% stats(li_perimeter))\n",
    "mean, _, var = stats(li_perimeter)\n",
    "std = np.sqrt(var)\n",
    "li_perimeter_3sigma = [i for i in li_area if i>(mean+3*std) or i<(mean-3*std) ]\n",
    "print(\"周长在3sigma之外的数量：\", len(li_area_3sigma))\n"
   ]
  },
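  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 3-sigma count above can be sanity-checked on synthetic data. The sketch below (an addition, with made-up grain areas) shows that for roughly normal measurements nearly everything lands inside mean ± 3*std, so only gross outliers, e.g. touching grains segmented as one blob, fall outside."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical per-grain areas: 95 normal samples plus two gross outliers\n",
    "rng = np.random.default_rng(0)\n",
    "areas = rng.normal(580, 80, 95)\n",
    "areas = np.append(areas, [10.0, 1900.0])  # e.g. debris and two touching grains\n",
    "\n",
    "mean, std = areas.mean(), areas.std()\n",
    "inside = [a for a in areas if mean - 3 * std <= a <= mean + 3 * std]\n",
    "print(f'within 3 sigma: {len(inside)} of {len(areas)}')\n"
   ]
  },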
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. 使用棋盘格及自选风景图像，分别使用SIFT、FAST及ORB算子检测角点，并比较分析检测结果。\n",
    "(可选)使用Harris角点检测算子检测棋盘格，并与上述结果比较。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def fast_detect(image, scale, image_name):\n",
    "    scale_percent = scale  # percent of original size\n",
    "    width = int(image.shape[1] * scale_percent / 100)\n",
    "    height = int(image.shape[0] * scale_percent / 100)\n",
    "    dim = (width, height)\n",
    "    # resize image\n",
    "    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)\n",
    "    # fast = cv2.FastFeatureDetector_create(nonmaxSuppression=False)\n",
    "    fast = cv2.FastFeatureDetector_create(\n",
    "        threshold=40,\n",
    "        nonmaxSuppression=False,\n",
    "        type=cv2.FAST_FEATURE_DETECTOR_TYPE_9_16)\n",
    "    kp = fast.detect(image, None)\n",
    "    # print(\"Total Keypoints without nonmaxSuppression 2: {}\".format(len(kp)))\n",
    "    img4 = cv2.drawKeypoints(image, kp, None, color=(255, 0, 0))\n",
    "    cv2.imshow('origin', image)\n",
    "    cv2.imshow('fast_result', img4)\n",
    "    cv2.imwrite(\"fast-\" + image_name + \".jpg\",img4)\n",
    "    cv2.waitKey()\n",
    "    cv2.destroyAllWindows()\n",
    "\n",
    "\n",
    "filename = 'chessboard.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "fast_detect(image, 20, \"chessboard\")\n",
    "\n",
    "filename = 'view.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "fast_detect(image, 50, \"view\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def sift_detect(image, scale, image_name):\n",
    "    scale_percent = scale  # percent of original size\n",
    "    width = int(image.shape[1] * scale_percent / 100)\n",
    "    height = int(image.shape[0] * scale_percent / 100)\n",
    "    dim = (width, height)\n",
    "    # resize image\n",
    "    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)\n",
    "    # fast = cv2.FastFeatureDetector_create(nonmaxSuppression=False)\n",
    "    sift = cv2.xfeatures2d.SIFT_create()\n",
    "    keypoints, descriptor = sift.detectAndCompute(image, None)\n",
    "    image_result = cv2.drawKeypoints(image=image,\n",
    "                                     outImage=image,\n",
    "                                     keypoints=keypoints,\n",
    "                                     flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT,\n",
    "                                     color=(255, 0, 0))\n",
    "    cv2.imshow('origin', image)\n",
     "    cv2.imshow('sift_result', image_result)\n",
    "    cv2.imwrite( \"sift-\" + image_name + \".jpg\", image_result)\n",
    "    cv2.waitKey()\n",
    "    cv2.destroyAllWindows()\n",
    "\n",
    "\n",
    "filename = 'chessboard.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "sift_detect(image, 20, \"chessboard\")\n",
    "\n",
    "filename = 'view.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "sift_detect(image, 50, \"view\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "\n",
    "def orb_detect(image, scale, image_name):\n",
    "    scale_percent = scale  # percent of original size\n",
    "    width = int(image.shape[1] * scale_percent / 100)\n",
    "    height = int(image.shape[0] * scale_percent / 100)\n",
    "    dim = (width, height)\n",
    "    # resize image\n",
    "    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)\n",
    "    orb = cv2.ORB_create()\n",
    "    keypoints, descriptor = orb.detectAndCompute(image, None)\n",
    "    image_result = cv2.drawKeypoints(image=image,\n",
    "                                     outImage=image,\n",
    "                                     keypoints=keypoints,\n",
    "                                     flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT,\n",
    "                                     color=(255, 0, 0))\n",
    "    cv2.imshow('origin', image)\n",
     "    cv2.imshow('orb_result', image_result)\n",
    "    cv2.imwrite( \"orb-\" + image_name + \".jpg\", image_result)\n",
    "    cv2.waitKey()\n",
    "    cv2.destroyAllWindows()\n",
    "\n",
    "filename = 'chessboard.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "orb_detect(image, 20, \"chessboard\")\n",
    "\n",
    "filename = 'view.png'\n",
    "image = cv2.imread(filename, 0)\n",
    "orb_detect(image, 50, \"view\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def harris_detect(image, scale, image_name):\n",
    "    scale_percent = scale  # percent of original size\n",
    "    width = int(image.shape[1] * scale_percent / 100)\n",
    "    height = int(image.shape[0] * scale_percent / 100)\n",
    "    dim = (width, height)\n",
    "    # resize image\n",
    "    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)\n",
    "    image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n",
    "    image_float = np.float32(image_gray)\n",
    "    image_result = cv2.cornerHarris(image_float, 2, 3, 0.04)\n",
    "    image_cp = cv2.cvtColor(image_gray, cv2.COLOR_GRAY2BGR)\n",
    "    image_cp[image_result > 0.01 * image_result.max()] = [255, 0, 0]\n",
     "    cv2.imshow('harris_result', image_cp)\n",
    "    cv2.imwrite(\"harris-\" + image_name + \".jpg\", image_cp)\n",
    "    cv2.waitKey()\n",
    "    cv2.destroyAllWindows()\n",
    "\n",
    "\n",
    "filename = 'chessboard.png'\n",
    "image = cv2.imread(filename)\n",
    "harris_detect(image, 20, \"chessboard\")\n",
    "\n",
    "filename = 'view.png'\n",
    "image = cv2.imread(filename)\n",
    "harris_detect(image, 50, \"view\")\n"
   ]
  },
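  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a file-independent cross-check (an addition), the sketch below runs FAST, ORB, and Harris on a synthetic 8x8 checkerboard generated with NumPy. FAST's segment test can struggle at ideal X-junctions, where the test circle contains no long contiguous bright or dark arc, so its count here may be low."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Synthetic checkerboard: 8x8 squares of 40 px each (assumed geometry)\n",
    "tile = np.zeros((80, 80), np.uint8)\n",
    "tile[:40, 40:] = 255\n",
    "tile[40:, :40] = 255\n",
    "board = np.tile(tile, (4, 4))\n",
    "\n",
    "fast = cv2.FastFeatureDetector_create(threshold=40)\n",
    "orb = cv2.ORB_create()\n",
    "print('FAST keypoints:', len(fast.detect(board, None)))\n",
    "print('ORB keypoints: ', len(orb.detect(board, None)))\n",
    "\n",
    "# Harris responds strongly at the X-junctions between squares\n",
    "harris = cv2.cornerHarris(np.float32(board), 2, 3, 0.04)\n",
    "print('Harris peaks:', int((harris > 0.01 * harris.max()).sum()))\n"
   ]
  },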
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 棋盘格图片：fast、orb算法检测效果较好，sift算法存在误检的中心点。harris算法我这只输出超过0.01* max的，效果不太明显\n",
    "- 风景图片：该风景图片对比度不如棋盘格高。误检点sift>fast，误检区域主要分布在山上的树林、雪山、雾气氤氲的地方。orb能较好的检测出船、山脊、雪山的角点，harris算法缺失了一些雾气氤氲的角点"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
