{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Road Following\n",
    "\n",
    "If you've run through the collision avoidance example, you should be familiar with the following three steps:\n",
    "\n",
    "1. Data collection\n",
    "2. Training\n",
    "3. Deployment\n",
    "\n",
    "In this example, we'll do the same thing! Except, instead of classification, you'll learn a different fundamental technique, **regression**, which we'll use to enable JetBot to follow a road (or really, any path or target point).\n",
    "\n",
    "1. Place the JetBot in different positions on a path (offset from center, at different angles, etc.)\n",
    "\n",
    "> Remember from collision avoidance: data variation is key!\n",
    "\n",
    "2. Display the live camera feed from the robot\n",
    "3. Using a gamepad controller, place a \"green dot\" on the image, corresponding to the target direction we want the robot to travel\n",
    "4. Store the X, Y values of this green dot along with the image from the robot's camera\n",
    "\n",
    "Then, in the training notebook, we'll train a neural network to predict the X, Y values of our label. In the live demo, we'll use the predicted X, Y values to compute an approximate steering value (it's not \"exactly\" an angle, as that would require image calibration, but it's roughly proportional to the angle, so our controller will work fine).\n",
    "\n",
    "So how do you decide exactly where to place the target for this example? Here is a guide we think may help:\n",
    "\n",
    "1. Look at the live video feed from the camera\n",
    "2. Imagine the path the robot should follow (try to estimate how far it can travel without running off the road)\n",
    "3. Place the target as far along this path as the robot can travel straight toward it without \"running off\" the road.\n",
    "\n",
    "> For example, if we're on a very straight road, we could place it at the horizon. If we're on a sharp turn, it may need to be placed closer to the robot so that it doesn't run out of bounds.\n",
    "\n",
    "Assuming our deep learning model works as intended, these labeling guidelines should ensure the following:\n",
    "\n",
    "1. The robot can safely travel directly toward the target (without going out of bounds, etc.)\n",
    "2. The target will continually progress along our imagined path\n",
    "\n",
    "What we get is a \"carrot on a stick\" that moves along our desired trajectory. Deep learning decides where to place the carrot, and JetBot just follows it :)"
   ]
  },
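  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make \"roughly proportional to the angle\" concrete, here is a minimal sketch of how a steering angle could be derived from a predicted (x, y) label, measuring the heading from the bottom-center of the image toward the green dot. The 0.5 forward offset below is an illustrative assumption, not the exact value used in the live demo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def steering_from_xy(x, y):\n",
    "    # x, y are the normalized label values in [-1, 1] predicted by the network.\n",
    "    # Treat the bottom-center of the image as the robot's position and compute\n",
    "    # the heading toward the green dot; the 0.5 - y forward offset is an\n",
    "    # illustrative assumption.\n",
    "    return np.arctan2(x, 0.5 - y)\n",
    "\n",
    "print(steering_from_xy(0.0, 0.0))      # straight ahead -> 0.0\n",
    "print(steering_from_xy(0.5, 0.0) > 0)  # dot to the right -> positive angle"
   ]
  },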
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Video Example\n",
    "\n",
    "Execute the block of code below to see an example of how to label images. This model works after just 123 images :)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import HTML\n",
    "HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/FW4En6LejhI\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import Libraries"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's get started by importing all the libraries required for data collection. We'll mainly use OpenCV to visualize and save the labeled images. Libraries such as uuid and datetime are used for image naming."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# IPython Libraries for display and widgets\n",
    "import traitlets\n",
    "import ipywidgets.widgets as widgets\n",
    "from IPython.display import display\n",
    "\n",
    "# Camera and Motor Interface for JetBot\n",
    "from jetbot import Robot, Camera, bgr8_to_jpeg\n",
    "\n",
    "# Python basic packages for image annotation\n",
    "from uuid import uuid1\n",
    "import os\n",
    "import json\n",
    "import glob\n",
    "import datetime\n",
    "import numpy as np\n",
    "import cv2\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Display the Live Camera Feed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, let's initialize and display our camera like we did in the teleoperation notebook.\n",
    "\n",
    "We use JetBot's Camera class to enable the CSI MIPI camera. Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data at a larger image size and downscale to the desired size later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "camera = Camera()\n",
    "\n",
    "image_widget = widgets.Image(format='jpeg', width=224, height=224)\n",
    "target_widget = widgets.Image(format='jpeg', width=224, height=224)\n",
    "\n",
    "x_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='x')\n",
    "y_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='y')\n",
    "\n",
    "def display_xy(camera_image):\n",
    "    image = np.copy(camera_image)\n",
    "    x = x_slider.value\n",
    "    y = y_slider.value\n",
    "    x = int(x * 224 / 2 + 112)\n",
    "    y = int(y * 224 / 2 + 112)\n",
    "    image = cv2.circle(image, (x, y), 8, (0, 255, 0), 3)\n",
    "    image = cv2.circle(image, (112, 224), 8, (0, 0,255), 3)\n",
    "    image = cv2.line(image, (x,y), (112,224), (255,0,0), 3)\n",
    "    jpeg_image = bgr8_to_jpeg(image)\n",
    "    return jpeg_image\n",
    "\n",
    "time.sleep(1)\n",
    "traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)\n",
    "traitlets.dlink((camera, 'value'), (target_widget, 'value'), transform=display_xy)\n",
    "\n",
    "display(widgets.HBox([image_widget, target_widget]), x_slider, y_slider)"
   ]
  },
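  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The note above mentions collecting at a larger image size and scaling down later. Here is a quick sketch of that downscaling step with OpenCV; the 640x480 capture size is illustrative, while 224x224 is the network input size from the text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import cv2\n",
    "\n",
    "# Illustrative stand-in for a larger captured BGR frame.\n",
    "large_frame = np.zeros((480, 640, 3), dtype=np.uint8)\n",
    "\n",
    "# Shrink to the 224x224 input size the network expects; INTER_AREA is the\n",
    "# usual interpolation choice when downscaling.\n",
    "small_frame = cv2.resize(large_frame, (224, 224), interpolation=cv2.INTER_AREA)\n",
    "print(small_frame.shape)  # (224, 224, 3)"
   ]
  },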
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create Gamepad Controller\n",
    "\n",
    "This step is similar to the \"teleoperation\" task. In this task, we will use a gamepad controller to label images.\n",
    "\n",
    "The first thing we want to do is create an instance of the Controller widget, which we'll use to label images with \"x\" and \"y\" values as mentioned in the introduction. The Controller widget takes an index parameter, which specifies the number of the controller. This is useful in case you have multiple controllers attached, or some gamepads appear as multiple controllers. To determine the index of the controller you're using:\n",
    "\n",
    "1. Visit http://html5gamepad.com.\n",
    "2. Press buttons on the gamepad you're using.\n",
    "3. Remember the index of the gamepad that responds to the button presses.\n",
    "\n",
    "Next, we'll create and display our controller using that index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "controller = widgets.Controller(index=0)\n",
    "\n",
    "display(controller)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Connect Gamepad Controller to Label Images\n",
    "\n",
    "Now, even though we've connected our gamepad, we haven't yet attached the controller to label the images! We'll connect it to the left and right vertical axes using the dlink function. Unlike the link function, the dlink function allows us to attach a transform between the source and target."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "widgets.jsdlink((controller.axes[2], 'value'), (x_slider, 'value'))\n",
    "widgets.jsdlink((controller.axes[3], 'value'), (y_slider, 'value'))"
   ]
  },
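  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To show what a dlink transform looks like, here is a small self-contained example using plain traitlets objects. The negation is hypothetical, e.g. to flip a gamepad axis that reports inverted values; the Knob class exists only for this sketch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import traitlets\n",
    "\n",
    "class Knob(traitlets.HasTraits):\n",
    "    value = traitlets.Float(0.0)\n",
    "\n",
    "src, dst = Knob(), Knob()\n",
    "\n",
    "# dlink pushes changes from src.value to dst.value, applying the transform;\n",
    "# here we negate the value, e.g. to flip an inverted axis.\n",
    "traitlets.dlink((src, 'value'), (dst, 'value'), transform=lambda v: -v)\n",
    "\n",
    "src.value = 0.25\n",
    "print(dst.value)  # -0.25"
   ]
  },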
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Collect Data\n",
    "\n",
    "The following block of code will display the live image feed, as well as the number of images we've saved. We store the target X, Y values by:\n",
    "\n",
    "1. Placing the green dot on the target\n",
    "2. Pressing \"down\" on the DPAD to save\n",
    "\n",
    "This will store a file in the ``dataset_xy`` folder with a filename like\n",
    "\n",
    "``xy_<x value>_<y value>_<uuid>.jpg``\n",
    "\n",
    "When we train, we load the images and parse the x, y values from the filename."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATASET_DIR = 'dataset_xy'\n",
    "\n",
    "# we have this \"try/except\" statement because these next functions can throw an error if the directories exist already\n",
    "try:\n",
    "    os.makedirs(DATASET_DIR)\n",
    "except FileExistsError:\n",
    "    print('Directories not created because they already exist')\n",
    "\n",
    "for b in controller.buttons:\n",
    "    b.unobserve_all()\n",
    "\n",
    "count_widget = widgets.IntText(description='count', value=len(glob.glob(os.path.join(DATASET_DIR, '*.jpg'))))\n",
    "\n",
    "def xy_uuid(x, y):\n",
    "    return 'xy_%03d_%03d_%s' % (x * 50 + 50, y * 50 + 50, uuid1())\n",
    "\n",
    "def save_snapshot(change):\n",
    "    if change['new']:\n",
    "        uuid = xy_uuid(x_slider.value, y_slider.value)\n",
    "        image_path = os.path.join(DATASET_DIR, uuid + '.jpg')\n",
    "        with open(image_path, 'wb') as f:\n",
    "            f.write(image_widget.value)\n",
    "        count_widget.value = len(glob.glob(os.path.join(DATASET_DIR, '*.jpg')))\n",
    "\n",
    "controller.buttons[13].observe(save_snapshot, names='value')\n",
    "\n",
    "display(widgets.VBox([\n",
    "    target_widget,\n",
    "    count_widget\n",
    "]))"
   ]
  },
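  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the x, y values are encoded in the filename (as x*50+50 and y*50+50, zero-padded to three digits by xy_uuid above), the training notebook can recover them by slicing the name. A sketch of that inverse mapping; the parse_xy helper is illustrative, not part of the JetBot library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "def parse_xy(path):\n",
    "    # Filenames look like xy_<xxx>_<yyy>_<uuid>.jpg, where xxx = x*50+50 and\n",
    "    # yyy = y*50+50 were written as zero-padded 3-digit integers.\n",
    "    name = os.path.basename(path)\n",
    "    x = (int(name[3:6]) - 50) / 50.0\n",
    "    y = (int(name[7:10]) - 50) / 50.0\n",
    "    return x, y\n",
    "\n",
    "print(parse_xy('dataset_xy/xy_100_050_example.jpg'))  # (1.0, 0.0)"
   ]
  },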
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Next Steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following terminal command to zip our dataset folder into a single zip file.\n",
    "\n",
    "> If you're training on the JetBot itself, you can skip this step!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ! prefix indicates that we want to run the cell as a shell (or terminal) command.\n",
    "\n",
    "The -r flag in the zip command below indicates recursive, so that we include all nested files; the -q flag indicates quiet, so that the zip command doesn't print any output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def timestr():\n",
    "    return str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))\n",
    "\n",
    "!zip -r -q road_following_{DATASET_DIR}_{timestr()}.zip {DATASET_DIR}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You should see a file named road_following_<Date&Time>.zip in the Jupyter Lab file browser. You should download the zip file using the Jupyter Lab file browser by right-clicking it and selecting Download."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
