{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "92448b5a",
   "metadata": {},
   "source": [
    "# Simulation of a production line with defects - Dataset creation and Inference\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6037882b",
   "metadata": {},
   "source": [
    "_This notebook is originally created by [@paularamos](https://github.com/paularamo) for CVPR-2022 Tutorial [How to get quick and performant model for your edge application. From data to application](https://paularamo.github.io/cvpr-2022/)_\n",
    "\n",
    "### Definitions\n",
    "\n",
    "[Anomalib](https://github.com/openvinotoolkit/anomalib): Anomalib is a deep learning library that aims to collect state-of-the-art anomaly detection algorithms for benchmarking on both public and private datasets. Anomalib provides several ready-to-use implementations of anomaly detection algorithms described in the recent literature, as well as a set of tools that facilitate the development and implementation of custom models. The library has a strong focus on image-based anomaly detection, where the goal of the algorithm is to identify anomalous images, or anomalous pixel regions within images in a dataset.\n",
    "\n",
    "[Dobot](https://en.dobot.cn/products/education/magician.html): The Magician is a portable educational robot arm capable of running various automation tasks. With its C++ and Python interface, we can control the robot from this notebook.\n",
    "\n",
    "> NOTE:\n",
    "> If you don't have the robot, you can replace it with your own custom problem.\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "ecfdde70",
   "metadata": {},
   "source": [
    "### Use case\n",
    "\n",
    "Using the [Dobot Magician](https://www.dobot.cc/dobot-magician/product-overview.html) we can simulate a production line system. Imagine we have a cube factory that needs to know when a defective piece appears in the process. We know very well what normal cubes look like. Defects appear infrequently, and we need to take those defective cubes out of the production line.\n",
    "\n",
    "<img src=\"https://user-images.githubusercontent.com/10940214/174126337-b344bbdc-6343-4d85-93e8-0cb1bf39a4e3.png\" alt=\"drawing\" style=\"width:400px;\"/>\n",
    "\n",
    "| Class    | Yellow cube                                                                                                                                           | Red cube                                                                                                                                              | Green cube                                                                                                                                            | Inferencing using Anomalib                                                                                                                            |\n",
    "| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |\n",
    "| Normal   | <img src=\"https://user-images.githubusercontent.com/10940214/174083561-38eec918-efc2-4ceb-99b1-bbb4c91396b2.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174083638-85ff889c-6222-4428-9c7d-9ad62bd15afe.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174083707-364177d4-373b-4891-96ce-3e5ea923e440.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174129305-03d9b71c-dfd9-492f-b42e-01c5c24171cc.jpg\" alt=\"drawing\" style=\"width:150px;\"/> |\n",
    "| Abnormal | <img src=\"https://user-images.githubusercontent.com/10940214/174083805-df0a0b03-58c7-4ba8-af50-fd94d3a13e58.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174083873-22699523-22b4-4a55-a3da-6520095af8af.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174083944-38d5a6f4-f647-455b-ba4e-69482dfa3562.jpg\" alt=\"drawing\" style=\"width:150px;\"/> | <img src=\"https://user-images.githubusercontent.com/10940214/174129253-f7a567d0-84f7-4050-8065-f00ba8bb973d.jpg\" alt=\"drawing\" style=\"width:150px;\"/> |\n",
    "\n",
    "Using Anomalib, we expect to see the results shown in the last column of the table above.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc20e36d",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"501b_inference_with_a_robotic_arm.ipynb.\"\"\"\n",
    "\n",
    "# Anomalib imports\n",
    "from __future__ import annotations\n",
    "\n",
    "import sys\n",
    "import time  # time library\n",
    "from datetime import datetime\n",
    "from pathlib import Path\n",
    "from threading import Thread\n",
    "from typing import TYPE_CHECKING\n",
    "\n",
    "if TYPE_CHECKING:\n",
    "    import numpy as np\n",
    "\n",
    "# importing required libraries\n",
    "import cv2  # OpenCV library\n",
    "\n",
    "from anomalib.deploy import OpenVINOInferencer"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "ebd2ea39",
   "metadata": {},
   "source": [
    "### Helper functions\n",
    "\n",
    "Here you will find functions to create filenames, capture images, run the inference, and read the confidence of the detection.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dda6703d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Prepare the path to the datasets and the weights\n",
    "dataset_path = Path.cwd() / \"cubes\"\n",
    "weights_path = Path.cwd() / \"weights\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78aa3bce",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def create_filename(path: Path) -> str:\n",
    "    \"\"\"Create the filename for new data(images).\n",
    "\n",
    "    Args:\n",
    "        path (Path): Initial path to save new images and results.\n",
    "\n",
    "    Returns:\n",
    "        str: Captured image filename\n",
    "    \"\"\"\n",
    "    path.mkdir(exist_ok=True, parents=True)\n",
    "    now = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n",
    "    return str(path / f\"input_{now}.jpg\")"
   ]
  },
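  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "b7c1f2a0",
   "metadata": {},
   "source": [
    "As a quick, optional sanity check (not part of the original workflow), the next cell calls `create_filename` with a throwaway temporary folder: the helper should create the folder and return a timestamped `input_YYYYMMDDHHMMSS.jpg` path.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7c1f2a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check for create_filename using a throwaway folder.\n",
    "import tempfile\n",
    "\n",
    "demo_dir = Path(tempfile.mkdtemp()) / \"demo\"\n",
    "demo_file = create_filename(path=demo_dir)\n",
    "print(demo_file)\n",
    "assert demo_dir.exists()  # the helper created the folder\n",
    "assert Path(demo_file).name.startswith(\"input_\")  # timestamped filename\n",
    "assert demo_file.endswith(\".jpg\")"
   ]
  },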
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "8c6c7eab",
   "metadata": {},
   "source": [
    "### Prepare the mode (acquisition or inference) and define the work directory\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0af9abcb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Acquisition mode\n",
    "acquisition = False  # True to capture new images, False to run inference\n",
    "source = 0  # number of the camera you want to use\n",
    "folder = \"abnormal\"  # normal or abnormal\n",
    "\n",
    "# If acquisition is False this notebook will work in inference mode\n",
    "if acquisition is False:\n",
    "    # If you are running inference check where the OpenVINO model is stored\n",
    "    openvino_model_path = weights_path / \"openvino\" / \"model.bin\"\n",
    "    metadata_path = weights_path / \"openvino\" / \"metadata.json\"\n",
    "\n",
    "    print(\"OpenVINO model exists: \", openvino_model_path.exists())\n",
    "    print(\"OpenVINO path: \", openvino_model_path)\n",
    "    print(\"Metadata file exists: \", metadata_path.exists())\n",
    "    print(\"Metadata path: \", metadata_path)\n",
    "\n",
    "    inferencer = OpenVINOInferencer(\n",
    "        path=openvino_model_path,  # Path to the OpenVINO IR model.\n",
    "        metadata=metadata_path,  # Path to the metadata file.\n",
    "        device=\"CPU\",  # We would like to run it on an Intel CPU.\n",
    "    )\n",
    "\n",
    "    if dataset_path.exists() is False:\n",
    "        print(\"Make sure the dataset is in the proper folder or has already been created.\")\n",
    "else:\n",
    "    dataset_path.mkdir(parents=True, exist_ok=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "43968a85",
   "metadata": {},
   "source": [
    "### Helper class for implementing multi-threading\n",
    "\n",
    "Using multi-threading we will open the video to auto-capture an image when the robot locates the cube in front of the camera.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c1ec34c",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CameraStream:\n",
    "    \"\"\"Read video stream from camera via multi-threading.\"\"\"\n",
    "\n",
    "    def __init__(self, stream_id: int = 0) -> None:\n",
    "        self.stream_id = stream_id\n",
    "\n",
    "        # opening video capture stream\n",
    "        self.video_capture = cv2.VideoCapture(self.stream_id)\n",
    "        if self.video_capture.isOpened() is False:\n",
    "            print(\"[Exiting]: Error accessing cam stream.\")\n",
    "            sys.exit(0)\n",
    "        fps_input_stream = int(self.video_capture.get(cv2.CAP_PROP_FPS))  # hardware fps\n",
    "        print(f\"FPS of input stream: {fps_input_stream}\")\n",
    "\n",
    "        # reading a single frame from vcap stream for initializing\n",
    "        self.grabbed, self.frame = self.video_capture.read()\n",
    "        if self.grabbed is False:\n",
    "            print(\"[Exiting] No more frames to read\")\n",
    "            sys.exit(0)\n",
    "        # self.stopped stays True until start() is called\n",
    "        self.stopped = True\n",
    "        # thread instantiation\n",
    "        self.thread = Thread(target=self.update, args=())\n",
    "        self.thread.daemon = True  # daemon threads run in background\n",
    "\n",
    "    def start(self) -> None:\n",
    "        \"\"\"Start thread.\"\"\"\n",
    "        self.stopped = False\n",
    "        self.thread.start()\n",
    "\n",
    "    def update(self) -> None:\n",
    "        \"\"\"Update the next available frame.\"\"\"\n",
    "        while True:\n",
    "            if self.stopped is True:\n",
    "                break\n",
    "            self.grabbed, self.frame = self.video_capture.read()\n",
    "            if self.grabbed is False:\n",
    "                print(\"[Exiting] No more frames to read\")\n",
    "                self.stopped = True\n",
    "                break\n",
    "        self.video_capture.release()\n",
    "\n",
    "    def read(self) -> np.ndarray:\n",
    "        \"\"\"Read the next frame.\"\"\"\n",
    "        return self.frame\n",
    "\n",
    "    def stop(self) -> None:\n",
    "        \"\"\"Stop reading frames.\"\"\"\n",
    "        self.stopped = True"
   ]
  },
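  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "b7c1f2a2",
   "metadata": {},
   "source": [
    "As a camera-free illustration (not part of the original pipeline), the next cell sketches the same pattern `CameraStream` uses: a daemon thread keeps refreshing a shared \"latest value\" while the main thread reads it on demand. A simple counter stands in for `cv2.VideoCapture` frames.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7c1f2a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Camera-free sketch of the CameraStream pattern: a counter plays the frames.\n",
    "import time\n",
    "from threading import Thread\n",
    "\n",
    "\n",
    "class CounterStream:\n",
    "    \"\"\"Background thread keeps updating a value; read() returns the latest.\"\"\"\n",
    "\n",
    "    def __init__(self) -> None:\n",
    "        self.value = 0\n",
    "        self.stopped = True\n",
    "        self.thread = Thread(target=self.update, daemon=True)\n",
    "\n",
    "    def start(self) -> None:\n",
    "        self.stopped = False\n",
    "        self.thread.start()\n",
    "\n",
    "    def update(self) -> None:\n",
    "        # Stands in for update() above: refresh the shared value in a loop.\n",
    "        while not self.stopped:\n",
    "            self.value += 1\n",
    "            time.sleep(0.01)\n",
    "\n",
    "    def read(self) -> int:\n",
    "        # Stands in for read() above: return the latest value, no blocking.\n",
    "        return self.value\n",
    "\n",
    "    def stop(self) -> None:\n",
    "        self.stopped = True\n",
    "\n",
    "\n",
    "counter_stream = CounterStream()\n",
    "counter_stream.start()\n",
    "time.sleep(0.1)\n",
    "latest = counter_stream.read()\n",
    "counter_stream.stop()\n",
    "print(\"Latest value from background thread:\", latest)\n",
    "assert latest > 0"
   ]
  },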
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "62854abc",
   "metadata": {},
   "source": [
    "### Using a webcam or a USB camera for running the inference\n",
    "\n",
    "Connect and identify your USB camera; we will use a video player to embed the video in this notebook.\n",
    "\n",
    "We will now work with the robot, and the driver must remain in the same 501 folder. Please copy the files from `./501_dobot/dobot_api` into `./501_dobot`. If the `dobot_api` folder hasn't been created yet, make sure you run the notebook [501a](https://github.com/openvinotoolkit/anomalib/blob/main/notebooks/500_use_cases/501_dobot/501a_training_a_model_with_cubes_from_a_robotic_arm.ipynb) first.\n",
    "\n",
    "> NOTE:\n",
    "> If you don't have the robot, you can replace it with your own custom problem. See the comments below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a63a5943",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Dobot/general imports\n",
    "# pylint: disable=wrong-import-order\n",
    "import DobotDllType as dType\n",
    "\n",
    "CON_STR = {\n",
    "    dType.DobotConnect.DobotConnect_NoError: \"DobotConnect_NoError\",\n",
    "    dType.DobotConnect.DobotConnect_NotFound: \"DobotConnect_NotFound\",\n",
    "    dType.DobotConnect.DobotConnect_Occupied: \"DobotConnect_Occupied\",\n",
    "}\n",
    "\n",
    "# Load Dll and get the CDLL object\n",
    "api = dType.load()\n",
    "\n",
    "# Connect Dobot\n",
    "state = dType.ConnectDobot(api, \"\", 115200)[0]\n",
    "print(\"Connect status:\", CON_STR[state])\n",
    "\n",
    "use_popup = True\n",
    "\n",
    "if state == dType.DobotConnect.DobotConnect_NoError:\n",
    "    print(\n",
    "        \"[HOME] Restore to home position at first launch, please wait 30 seconds after turning on the Dobot Magician.\",\n",
    "    )\n",
    "    print(\n",
    "        \"[BLOCKS] Place them beside the non-motor side of the conveyor belt,\"\n",
    "        \" the same side where the pick and place arm is.\",\n",
    "    )\n",
    "    print(\"[PLACING BLOCKS] Place the blocks by 3x3.\")\n",
    "    print(\"[CALIBRATION POINT] Looking from the back of Dobot, the top left block is the calibration point.\")\n",
    "    print(\"[CALIBRATION] Set the first variable to 0 to test the calibration point, then set 1 to start running.\")\n",
    "    print(\n",
    "        \"[DIRECTION] Standing behind Dobot Magician facing its front direction, X is front and back direction, \"\n",
    "        \"Y is left and right direction. \",\n",
    "    )\n",
    "    print(\"[CONNECTION] Motor of the conveyor belt connects to port Stepper1.\")\n",
    "\n",
    "    Calibration__0__Run__1 = 1\n",
    "    Calibration_X = 221.2288\n",
    "    Calibration_Y = -117.0036\n",
    "    Calibration_Z = -42.3512\n",
    "    Place_X = 23.7489  # 42.2995 #\n",
    "    Place_Y = -264.2602  # -264.6927 #\n",
    "    Place_Z = 18.0862  # 63.65 #\n",
    "    Anomaly_X = -112  # -84.287 #\n",
    "    Anomaly_Y = -170  # -170.454 #\n",
    "    Anomaly_Z = 90  # 61.5359 #\n",
    "    dType.SetEndEffectorParamsEx(api, 59.7, 0, 0, 1)\n",
    "    j = 0\n",
    "    k = 0\n",
    "    dType.SetPTPJointParamsEx(api, 400, 400, 400, 400, 400, 400, 400, 400, 1)\n",
    "    dType.SetPTPCommonParamsEx(api, 100, 100, 1)\n",
    "    dType.SetPTPJumpParamsEx(api, 40, 100, 1)\n",
    "    dType.SetPTPCmdEx(api, 0, Calibration_X, Calibration_Y, Calibration_Z, 0, 1)\n",
    "    dType.SetEndEffectorSuctionCupEx(api, 0, 1)\n",
    "    STEP_PER_CIRCLE = 360.0 / 1.8 * 10.0 * 16.0\n",
    "    MM_PER_CIRCLE = 3.1415926535898 * 36.0\n",
    "    vel = float(0) * STEP_PER_CIRCLE / MM_PER_CIRCLE\n",
    "    dType.SetEMotorEx(api, 1, 0, int(vel), 1)\n",
    "\n",
    "    if Calibration__0__Run__1:\n",
    "        for _ in range(9):\n",
    "            # initializing and starting multi-threaded webcam input stream\n",
    "            cam_stream = CameraStream(stream_id=0)  # 0 id for main camera\n",
    "            cam_stream.start()\n",
    "\n",
    "            dType.SetPTPCmdEx(api, 0, (Calibration_X - j), (Calibration_Y - k), (Calibration_Z - 10), 0, 1)\n",
    "            dType.SetEndEffectorSuctionCupEx(api, 1, 1)\n",
    "            dType.SetPTPCmdEx(api, 0, (Place_X - 0), (Place_Y - 0), (Place_Z + 90), 0, 1)\n",
    "\n",
    "            # adding a delay for simulating video processing time\n",
    "            delay = 0.3  # delay value in seconds\n",
    "            time.sleep(delay)\n",
    "            # Capture a frame from the video player - start thread\n",
    "            frame = cam_stream.read()\n",
    "\n",
    "            if acquisition:\n",
    "                # create filename to next frame\n",
    "                filename = create_filename(path=(dataset_path / folder))\n",
    "                cv2.imwrite(filename, frame)\n",
    "                dType.SetPTPCmdEx(api, 0, Place_X, Place_Y, Place_Z, 0, 1)\n",
    "\n",
    "            else:\n",
    "                # Get the inference results.\n",
    "                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n",
    "                # INFERENCE WITH OPENVINO\n",
    "                predictions = inferencer.predict(image=frame)\n",
    "                print(predictions.pred_score)\n",
    "                if predictions.pred_score > 0.48:  # modify the threshold depending on your needs\n",
    "                    dType.SetPTPCmdEx(api, 0, Anomaly_X, Anomaly_Y, Anomaly_Z, 0, 1)  # define point for abnormalities\n",
    "                else:\n",
    "                    dType.SetPTPCmdEx(api, 0, Place_X, Place_Y, Place_Z, 0, 1)\n",
    "\n",
    "            dType.SetEndEffectorSuctionCupEx(api, 0, 1)\n",
    "            j = j + 25\n",
    "            if j == 75:\n",
    "                k = k + 25\n",
    "                j = 0\n",
    "            dType.SetPTPCmdEx(api, 7, 0, 0, 20, 0, 1)\n",
    "            time_start = dType.gettime()[0]\n",
    "            STEP_PER_CIRCLE = 360.0 / 1.8 * 10.0 * 16.0\n",
    "            MM_PER_CIRCLE = 3.1415926535898 * 36.0\n",
    "            vel = float(50) * STEP_PER_CIRCLE / MM_PER_CIRCLE\n",
    "            dType.SetEMotorEx(api, 1, 1, int(vel), 1)\n",
    "            filename = None\n",
    "            score = 0\n",
    "            while True:\n",
    "                if (dType.gettime()[0]) - time_start >= 0.5:  # Time over conveyor belt\n",
    "                    STEP_PER_CIRCLE = 360.0 / 1.8 * 10.0 * 16.0\n",
    "                    MM_PER_CIRCLE = 3.1415926535898 * 36.0\n",
    "                    vel = float(0) * STEP_PER_CIRCLE / MM_PER_CIRCLE\n",
    "                    dType.SetEMotorEx(api, 1, 0, int(vel), 1)\n",
    "                    break\n",
    "        dType.SetEndEffectorSuctionCupEx(api, 0, 1)\n",
    "        dType.SetPTPCmdEx(api, 0, Calibration_X, Calibration_Y, Calibration_Z, 0, 1)\n",
    "        cam_stream.stop()  # stop the webcam stream"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "anomalib",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.11"
  },
  "vscode": {
   "interpreter": {
    "hash": "ae223df28f60859a2f400fae8b3a1034248e0a469f5599fd9a89c32908ed7a84"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
