{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Week3 进阶作业\n",
    "\n",
    "> **问题描述**\n",
    "（本周共计3个作业） "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. 在测试视频(OpenCV安装目录\\sources\\samples\\data)上，使用基于混合高斯模型的背景提取算法，提取前景并显示(显示二值化图像，前景为白色)。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "cap=cv2.VideoCapture('vtest.avi')\n",
    "kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))\n",
    "fgbg=cv2.createBackgroundSubtractorMOG2()# 实例化混合高斯模型\n",
    "#fgbg=cv2.createBackgroundSubtractorKNN(detectShadows=True)\n",
    "count=0\n",
    "\n",
    "while(True):\n",
    "    ret, frame=cap.read()\n",
    "    fgmask = fgbg.apply(frame)\n",
    "    fgmask = cv2.morphologyEx(fgmask,cv2.MORPH_OPEN,kernel)# 开运算，去噪点。\n",
    "    contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)# 在二值图像上检测物体轮廓\n",
    "    for c in contours:\n",
    "        Area=cv2.contourArea(c)# 面积\n",
    "        if Area<300:# 过滤操作\n",
    "            continue\n",
    "        count+=1\n",
    "#         print(\"{}-prospect:{}\".format(count,Area),end=\"  \")\n",
    "        x,y,w,h=cv2.boundingRect(c)\n",
    "#         print(\"x:{} y:{}\".format(x,y))\n",
    "\n",
    "        cv2.rectangle(frame, (x,y), (x+w,y+h), (0,255,0), 2)\n",
    "        cv2.putText(frame, str(count), (x,y), cv2.FONT_HERSHEY_COMPLEX, 0.4, (0,225,0), 1)# 在前景框上标上编号\n",
    "    cv2.putText(frame, \"count:\", (5, 20), cv2.FONT_HERSHEY_COMPLEX, 0.6, (0, 255, 0), 1) #显示总数\n",
    "    cv2.putText(frame, str(count), (75, 20), cv2.FONT_HERSHEY_COMPLEX, 0.6, (0, 255, 0), 1)\n",
    "#     print(\"----------------------------\")\n",
    "\n",
    "        #cv2.getBackgroundImage\n",
    "    cv2.resizeWindow(\"frame\", 700, 500 ) \n",
    "    cv2.resizeWindow(\"fgmask\", 700, 500 )\n",
    "    cv2.imshow('frame',frame)# 显示每一帧图像\n",
    "    cv2.imshow('fgmask', fgmask)# 显示每一帧图像对应的前景提取结果\n",
    "    key = cv2.waitKey(150) & 0xff# 控制视频播放速度\n",
    "    if key == 27:\n",
    "        break\n",
    "\n",
    "cap.release()\n",
    "cv2.destroyAllWindows()"
   ]
  },
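  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MOG2 also exposes its current background estimate via `getBackgroundImage()`. A minimal sketch of how it behaves, using synthetic frames instead of the video so it runs without `vtest.avi` (the sizes and pixel values are arbitrary choices for illustration):\n",
    "\n",
    "```python\n",
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "fgbg = cv2.createBackgroundSubtractorMOG2()\n",
    "# Feed a static synthetic scene so the model converges quickly.\n",
    "scene = np.full((120, 160, 3), 128, dtype=np.uint8)\n",
    "for _ in range(30):\n",
    "    fgbg.apply(scene)\n",
    "# A bright square plays the role of a moving foreground object.\n",
    "frame = scene.copy()\n",
    "frame[40:80, 60:100] = 255\n",
    "fgmask = fgbg.apply(frame)      # 255 inside the square, 0 elsewhere\n",
    "bg = fgbg.getBackgroundImage()  # current background estimate, same size as the input\n",
    "print(fgmask[60, 80], round(float(bg.mean())))\n",
    "```"
   ]
  },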
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"week3_1.png\" width=\"80%\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 在1基础上，将前景目标进行分割，进一步使用不同颜色矩形框标记，并在命令行窗口中输出每个矩形框的位置和大小。 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "命令行部分输出如下所示：\n",
    "<img src=\"week3_2.png\" width=\"80%\">"
   ]
  },
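  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the task-2 additions (a different colour per box, position and size printed). It runs on synthetic frames rather than `vtest.avi`, and the fixed BGR palette and the area threshold of 100 are arbitrary choices for illustration:\n",
    "\n",
    "```python\n",
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Fixed palette, cycled so that each detected region gets its own box colour (BGR).\n",
    "palette = [(0, 255, 0), (0, 0, 255), (255, 0, 0), (0, 255, 255)]\n",
    "\n",
    "fgbg = cv2.createBackgroundSubtractorMOG2()\n",
    "scene = np.zeros((120, 160, 3), dtype=np.uint8)\n",
    "for _ in range(30):           # let the model learn the empty scene\n",
    "    fgbg.apply(scene)\n",
    "\n",
    "frame = scene.copy()\n",
    "frame[20:50, 10:40] = 255     # two synthetic foreground blobs\n",
    "frame[70:100, 100:140] = 255\n",
    "fgmask = fgbg.apply(frame)\n",
    "\n",
    "boxes = []\n",
    "contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n",
    "for i, c in enumerate(contours):\n",
    "    if cv2.contourArea(c) < 100:  # drop small noise regions\n",
    "        continue\n",
    "    x, y, w, h = cv2.boundingRect(c)\n",
    "    boxes.append((x, y, w, h))\n",
    "    cv2.rectangle(frame, (x, y), (x + w, y + h), palette[i % len(palette)], 2)\n",
    "    print('box {}: x={} y={} w={} h={}'.format(i, x, y, w, h))\n",
    "```"
   ]
  },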
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. 安装ImageWatch，并在代码中通过设置断点，观察处理中间结果图像。 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "使用python学习，无法安装ImageWatch."
   ]
  },
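  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A common Python substitute for ImageWatch is simply to dump intermediate images to disk (or show them with matplotlib) at the point of interest, then inspect them while paused at a breakpoint. The `inspect` helper below is a hypothetical name, not a library function:\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "\n",
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "def inspect(name, img, out_dir=tempfile.gettempdir()):\n",
    "    # Hypothetical stand-in for ImageWatch: save an intermediate image so it\n",
    "    # can be examined while the program is paused at a breakpoint.\n",
    "    path = os.path.join(out_dir, name + '.png')\n",
    "    cv2.imwrite(path, img)\n",
    "    return path\n",
    "\n",
    "# Example: dump a mask from the middle of a pipeline.\n",
    "mask = np.zeros((50, 50), dtype=np.uint8)\n",
    "mask[10:40, 10:40] = 255\n",
    "p = inspect('fgmask_step', mask)\n",
    "print(p)\n",
    "```"
   ]
  },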
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. 使用光流估计方法，在前述测试视频上计算特征点，进一步进行特征点光流估计。 (扩展作业)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(38, 1, 2)\n"
     ]
    }
   ],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "cap=cv2.VideoCapture('vtest.avi')\n",
    "# 角点检测所需参数,在进行光流估计计算之前，需要先进行角点检测，然后再把检测出的特征点进行光流估计。\n",
    "feature_params = dict( maxCorners = 100,\n",
    "                       qualityLevel = 0.3,\n",
    "                       minDistance = 7)\n",
    "# lucas kanade参数\n",
    "lk_params = dict( winSize  = (15,15),\n",
    "                  maxLevel = 2)\n",
    "# 随机颜色条\n",
    "color = np.random.randint(0,255,(100,3))\n",
    "# 拿到第一帧图像，这里没有循环。\n",
    "ret,old_frame=cap.read()\n",
    "old_gray=cv2.cvtColor(old_frame,cv2.COLOR_BGR2GRAY)\n",
    "# 返回所有检测特征点，第一个参数输入图像，maxCorners：角点最大数量（效率），qualityLevel：品质因子（特征值越大的越好，来筛选）\n",
    "# minDistance：距离，相当于这区间有比这个角点强的，就不要这个弱的了。\n",
    "P0=cv2.goodFeaturesToTrack(old_gray,mask=None,**feature_params)# 获取图像中最好的角点特征,None表示在整幅图上寻找角点。\n",
    "print(P0.shape)# \n",
    "# 创建一个mask\n",
    "mask=np.zeros_like(old_frame)\n",
    "\n",
    "while (True):\n",
    "    ret,frame=cap.read()\n",
    "    frame_gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)\n",
    "    # 需要传入前一帧和当前图像以及前一帧检测到的角点\n",
    "    p1, st, err=cv2.calcOpticalFlowPyrLK(old_gray,frame_gray,P0,None,**lk_params)\n",
    "    good_new=p1[st==1]#st=1当前帧图像检测到了上一阵的角点特征,shape为（38,2）,2表示坐标值\n",
    "    good_old=P0[st==1]\n",
    "    # 绘制轨迹\n",
    "    for i,(new,old)in enumerate (zip(good_new,good_old)):\n",
    "        a,b = new.ravel()# 新坐标\n",
    "        c,d = old.ravel()# 旧坐标\n",
    "        mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)# 直线的起点（a，b），直线的终点坐标（c，d）\n",
    "        frame = cv2.circle(frame,(a,b),5,color[i].tolist(),-1)\n",
    "    img=cv2.add(frame,mask)\n",
    "    \n",
    "    cv2.imshow('frame',img)\n",
    "    key = cv2.waitKey(150) & 0xff\n",
    "    if key == 27:\n",
    "        break\n",
    "\n",
    "    old_gray = frame_gray.copy()\n",
    "    P0 = good_new.reshape(-1,1,2)\n",
    "\n",
    "cv2.destroyAllWindows()\n",
    "cap.release()"
   ]
  },
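  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One limitation of the loop above: tracked points are never replenished, so once corners drift out of view the track set only shrinks. The usual fix is to re-run `goodFeaturesToTrack` whenever too few points survive. A sketch on a synthetic image pair (a white square shifted 3 px to the right; the `MIN_POINTS` threshold is an arbitrary choice):\n",
    "\n",
    "```python\n",
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7)\n",
    "lk_params = dict(winSize=(15, 15), maxLevel=2)\n",
    "\n",
    "# Synthetic frame pair: a white square shifted 3 px right between frames.\n",
    "old = np.zeros((100, 100), dtype=np.uint8)\n",
    "old[30:60, 30:60] = 255\n",
    "new = np.zeros((100, 100), dtype=np.uint8)\n",
    "new[30:60, 33:63] = 255\n",
    "\n",
    "p0 = cv2.goodFeaturesToTrack(old, mask=None, **feature_params)\n",
    "p1, st, err = cv2.calcOpticalFlowPyrLK(old, new, p0, None, **lk_params)\n",
    "good = p1[st == 1]  # points successfully tracked into the current frame\n",
    "\n",
    "MIN_POINTS = 10\n",
    "if len(good) < MIN_POINTS:\n",
    "    # Too few survivors: detect a fresh set of corners on the current frame.\n",
    "    p0 = cv2.goodFeaturesToTrack(new, mask=None, **feature_params)\n",
    "else:\n",
    "    p0 = good.reshape(-1, 1, 2)\n",
    "print(len(good), p0.shape)\n",
    "```"
   ]
  },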
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<!-- <img src=\"week3_4.png\" width=\"80%\"> -->\n",
    "![]"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
