{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8347485f",
   "metadata": {},
   "source": [
    "# Deep Dive Reverse Video Search\n",
    "\n",
     "In the [previous tutorial](./1_reverse_video_search_engine.ipynb), we learned how to build a reverse video search engine. Now let's make the solution more practical for production."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "569571ec",
   "metadata": {},
   "source": [
    "## Preparation\n",
    "\n",
     "Let's recall the preparation steps first:\n",
     "1. Install packages\n",
     "2. Prepare data\n",
     "3. Start Milvus\n",
    "\n",
    "### Install packages\n",
    "\n",
     "Make sure you have installed the required Python packages:\n",
    "\n",
    "| package |\n",
    "| -- |\n",
    "| towhee |\n",
    "| towhee.models |\n",
    "| pillow |\n",
    "| ipython |\n",
    "| fastapi |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "d2d8e3e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "! python -m pip install -q towhee towhee.models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11ef6b1a",
   "metadata": {},
   "source": [
    "### Prepare data\n",
    "\n",
     "This tutorial will use a small dataset extracted from [Kinetics400](https://www.deepmind.com/open-source/kinetics). You can download the subset from [Github](https://github.com/towhee-io/examples/releases/download/data/reverse_video_search.zip). \n",
    "\n",
    "The data is organized as follows:\n",
    "- **train:** candidate videos, 20 classes, 10 videos per class (200 in total)\n",
    "- **test:** query videos, same 20 classes as train data, 1 video per class (20 in total)\n",
    "- **reverse_video_search.csv:** a csv file containing an ***id***, ***path***, and ***label*** for each video in train data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "54568b1a",
   "metadata": {},
   "outputs": [],
   "source": [
    "! curl -L https://github.com/towhee-io/examples/releases/download/data/reverse_video_search.zip -O\n",
    "! unzip -q -o reverse_video_search.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2171f2e7",
   "metadata": {},
   "source": [
     "To make it easier to load videos and measure results in later steps, we build some helper data and functions in advance:\n",
     "- **id_video:** a dict mapping each video id to its path\n",
     "- **label_ids:** a dict mapping each label to the list of video ids with that label\n",
     "- **ground_truth:** a function (defined together with the metrics below) that returns the ground-truth video ids for a query video given its path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "dd1b0ef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.read_csv('./reverse_video_search.csv')\n",
    "\n",
    "id_video = df.set_index('id')['path'].to_dict()\n",
    "label_ids = {}\n",
    "for label in set(df['label']):\n",
    "    label_ids[label] = list(df[df['label']==label].id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3e98f62",
   "metadata": {},
   "source": [
    "### Start Milvus\n",
    "\n",
     "Before building the engine, we need Milvus ready. Please make sure that you have started a [Milvus service](https://milvus.io/docs/install_standalone-docker.md). This notebook uses [Milvus 2.2.10](https://milvus.io/docs/v2.2.x/install_standalone-docker.md) and [pymilvus 2.2.11](https://milvus.io/docs/release_notes.md#2210)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "123fc72f",
   "metadata": {},
   "outputs": [],
   "source": [
    "! python -m pip install -q pymilvus==2.2.11"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7665543e",
   "metadata": {},
   "source": [
     "Here we prepare a function that creates a Milvus collection with the following parameters:\n",
     "- [L2 distance metric](https://milvus.io/docs/metric.md#Euclidean-distance-L2)\n",
     "- [IVF_FLAT index](https://milvus.io/docs/index.md#IVF_FLAT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f4fbffa1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pymilvus import connections, FieldSchema, CollectionSchema, DataType, Collection, utility\n",
    "\n",
    "connections.connect(host='127.0.0.1', port='19530')\n",
    "\n",
    "def create_milvus_collection(collection_name, dim):\n",
    "    if utility.has_collection(collection_name):\n",
    "        utility.drop_collection(collection_name)\n",
    "    \n",
     "    fields = [\n",
     "        FieldSchema(name='id', dtype=DataType.INT64, description='ids', is_primary=True, auto_id=False),\n",
     "        FieldSchema(name='embedding', dtype=DataType.FLOAT_VECTOR, description='embedding vectors', dim=dim)\n",
     "    ]\n",
    "    schema = CollectionSchema(fields=fields, description='deep dive reverse video search')\n",
    "    collection = Collection(name=collection_name, schema=schema)\n",
    "\n",
    "    # create IVF_FLAT index for collection.\n",
    "    index_params = {\n",
    "        'metric_type':'L2',\n",
    "        'index_type':\"IVF_FLAT\",\n",
    "        'params':{\"nlist\": 400}\n",
    "    }\n",
    "    collection.create_index(field_name=\"embedding\", index_params=index_params)\n",
    "    return collection"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "750d8e66",
   "metadata": {},
   "source": [
    "### Build Engine\n",
    "\n",
     "Now we are ready to build a reverse video search engine. Here we build one with the [TimeSformer model](https://towhee.io/action-classification/timesformer) and measure its performance as a baseline for later comparison."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "86790dac",
   "metadata": {},
   "outputs": [],
   "source": [
    "def read_csv(csv_file):\n",
    "    import csv\n",
    "    with open(csv_file, 'r', encoding='utf-8-sig') as f:\n",
    "        data = csv.DictReader(f)\n",
    "        for line in data:\n",
    "            yield line['id'], line['path'], line['label']\n",
    "\n",
    "def ground_truth(path):\n",
    "    label = path.split('/')[-2]\n",
    "    return label_ids[label]\n",
    "\n",
    "def mean_hit_ratio(actual, predicted):\n",
    "    ratios = []\n",
    "    for act, pre in zip(actual, predicted):\n",
    "        hit_num = len(set(act) & set(pre))\n",
    "        ratios.append(hit_num / len(act))\n",
    "    return sum(ratios) / len(ratios)\n",
    "\n",
    "def mean_average_precision(actual, predicted):\n",
    "    aps = []\n",
    "    for act, pre in zip(actual, predicted):\n",
    "        precisions = []\n",
    "        hit = 0\n",
    "        for idx, i in enumerate(pre):\n",
    "            if i in act:\n",
    "                hit += 1\n",
    "            precisions.append(hit / (idx + 1))\n",
    "        aps.append(sum(precisions) / len(precisions))\n",
    "    \n",
    "    return sum(aps) / len(aps)"
   ]
  },
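   {
    "cell_type": "markdown",
    "id": "8c1f2a3b",
    "metadata": {},
    "source": [
     "As a quick sanity check of the two metrics, consider a toy example with made-up ids: for ground truth `[1, 2, 3]` and prediction `[1, 4, 2]`, the hit ratio is 2/3 and the average precision is (1/1 + 1/2 + 2/3) / 3 ≈ 0.72."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "9d2e3b4c",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Toy sanity check for the metrics above (made-up ids, single query).\n",
     "actual = [[1, 2, 3]]\n",
     "predicted = [[1, 4, 2]]\n",
     "print(mean_hit_ratio(actual, predicted))          # 2 hits out of 3 -> ~0.667\n",
     "print(mean_average_precision(actual, predicted))  # (1 + 1/2 + 2/3) / 3 -> ~0.722"
    ]
   },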
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "d015dfaf",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table style=\"border-collapse: collapse;\"><tr><th style=\"text-align: center; font-size: 130%; border: none;\">mHR</th> <th style=\"text-align: center; font-size: 130%; border: none;\">mAP</th></tr> <tr><td style=\"text-align: center; vertical-align: center; border-right: solid 1px #D3D3D3; border-left: solid 1px #D3D3D3; \">0.715</td> <td style=\"text-align: center; vertical-align: center; border-right: solid 1px #D3D3D3; border-left: solid 1px #D3D3D3; \">0.7723293650793651</td></tr></table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import glob\n",
    "from towhee import pipe, ops\n",
    "from towhee.datacollection import DataCollection\n",
    "\n",
    "collection = create_milvus_collection('timesformer', 768)\n",
    "\n",
    "insert_pipe = (\n",
    "    pipe.input('csv_path')\n",
    "        .flat_map('csv_path', ('id', 'path', 'label'), read_csv)\n",
    "        .map('id', 'id', lambda x: int(x))\n",
    "        .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 8}))\n",
    "        .map('frames', ('labels', 'scores', 'features'), ops.action_classification.timesformer(skip_preprocess=True))\n",
    "        .map('features', 'features', ops.towhee.np_normalize())\n",
    "        .map(('id', 'features'), 'insert_res', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='timesformer'))\n",
    "        .output()\n",
    ")\n",
    "\n",
    "insert_pipe('reverse_video_search.csv')\n",
    "\n",
    "collection.load()\n",
    "eval_pipe = (\n",
    "    pipe.input('path')\n",
    "        .flat_map('path', 'path', lambda x: glob.glob(x))\n",
    "        .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 8}))\n",
    "        .map('frames', ('labels', 'scores', 'features'), ops.action_classification.timesformer(skip_preprocess=True))\n",
    "        .map('features', 'features', ops.towhee.np_normalize())\n",
    "        .map('features', 'result', ops.ann_search.milvus_client(host='127.0.0.1', port='19530', collection_name='timesformer', limit=10))  \n",
    "        .map('result', 'predict', lambda x: [i[0] for i in x])\n",
    "        .map('path', 'ground_truth', ground_truth)\n",
    "        .window_all(('ground_truth', 'predict'), 'mHR', mean_hit_ratio)\n",
    "        .window_all(('ground_truth', 'predict'), 'mAP', mean_average_precision)\n",
    "        .output('mHR', 'mAP')\n",
    ")\n",
    "\n",
    "res = DataCollection(eval_pipe('./test/*/*.mp4'))\n",
    "res.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9d78601",
   "metadata": {},
   "source": [
    "## Dimensionality Reduction\n",
    "\n",
     "In production, memory consumption is always a major concern, which can be relieved by reducing the embedding dimension. Random projection is a dimensionality reduction method for a set of vectors in Euclidean space. Since this method is fast and requires no training, we'll try this technique and compare its performance with the original TimeSformer engine:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2474daba",
   "metadata": {},
   "source": [
     "First, recall the engine's performance without dimensionality reduction shown above, where the embedding dimension is 768."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dca23c3e",
   "metadata": {},
   "source": [
     "To reduce the dimension, we can apply a projection matrix of the proper size to each original embedding. We just need to add an operator `.map('features', 'features', lambda x: np.dot(x, projection_matrix))` right after a video embedding is generated. Let's see how the engine performs with the embedding dimension reduced to 128."
   ]
  },
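   {
    "cell_type": "markdown",
    "id": "3f8a1c2d",
    "metadata": {},
    "source": [
     "In isolation, the projection step looks like the sketch below (with synthetic stand-ins for the embeddings): multiplying by a random Gaussian matrix maps 768-d vectors to 128-d while roughly preserving pairwise distances, which is why search quality degrades only mildly."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "4a9b2d3e",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "\n",
     "# Synthetic stand-ins for 768-d video embeddings.\n",
     "x = rng.normal(size=(10, 768))\n",
     "\n",
     "# Random Gaussian projection; scaling by 1/sqrt(128) keeps expected norms unchanged\n",
     "# (the pipeline can skip this constant because embeddings are normalized afterwards).\n",
     "proj = rng.normal(size=(768, 128)) / np.sqrt(128)\n",
     "y = x @ proj\n",
     "\n",
     "# Pairwise distance before vs. after projection: the ratio stays close to 1.\n",
     "d_before = np.linalg.norm(x[0] - x[1])\n",
     "d_after = np.linalg.norm(y[0] - y[1])\n",
     "print(d_before, d_after, d_after / d_before)"
    ]
   },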
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "7343f885",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table style=\"border-collapse: collapse;\"><tr><th style=\"text-align: center; font-size: 130%; border: none;\">mHR</th> <th style=\"text-align: center; font-size: 130%; border: none;\">mAP</th></tr> <tr><td style=\"text-align: center; vertical-align: center; border-right: solid 1px #D3D3D3; border-left: solid 1px #D3D3D3; \">0.61</td> <td style=\"text-align: center; vertical-align: center; border-right: solid 1px #D3D3D3; border-left: solid 1px #D3D3D3; \">0.6778511904761905</td></tr></table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "projection_matrix = np.random.normal(scale=1.0, size=(768, 128))\n",
    "\n",
    "collection = create_milvus_collection('timesformer_128', 128)\n",
    "\n",
    "insert_pipe = (\n",
    "    pipe.input('csv_path')\n",
    "        .flat_map('csv_path', ('id', 'path', 'label'), read_csv)\n",
    "        .map('id', 'id', lambda x: int(x))\n",
    "        .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 8}))\n",
    "        .map('frames', ('labels', 'scores', 'features'), ops.action_classification.timesformer(skip_preprocess=True))\n",
    "        .map('features', 'features', lambda x: np.dot(x, projection_matrix))\n",
    "        .map('features', 'features', ops.towhee.np_normalize())\n",
    "        .map(('id', 'features'), 'insert_res', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='timesformer_128'))\n",
    "        .output()\n",
    ")\n",
    "\n",
    "insert_pipe('reverse_video_search.csv')\n",
    "\n",
    "collection.load()\n",
    "eval_pipe = (\n",
    "    pipe.input('path')\n",
    "        .flat_map('path', 'path', lambda x: glob.glob(x))\n",
    "        .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 8}))\n",
    "        .map('frames', ('labels', 'scores', 'features'), ops.action_classification.timesformer(skip_preprocess=True))\n",
    "        .map('features', 'features', lambda x: np.dot(x, projection_matrix))\n",
    "        .map('features', 'features', ops.towhee.np_normalize())\n",
    "        .map('features', 'result', ops.ann_search.milvus_client(host='127.0.0.1', port='19530', collection_name='timesformer_128', limit=10))  \n",
    "        .map('result', 'predict', lambda x: [i[0] for i in x])\n",
    "        .map('path', 'ground_truth', ground_truth)\n",
    "        .window_all(('ground_truth', 'predict'), 'mHR', mean_hit_ratio)\n",
    "        .window_all(('ground_truth', 'predict'), 'mAP', mean_average_precision)\n",
    "        .output('mHR', 'mAP')\n",
    ")\n",
    "\n",
    "res = DataCollection(eval_pipe('./test/*/*.mp4'))\n",
    "res.show()"
   ]
  },
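   {
    "cell_type": "markdown",
    "id": "5b0c3e4f",
    "metadata": {},
    "source": [
     "For context on what the smaller dimension buys us, here is a back-of-envelope memory estimate for the raw embeddings of our 200 candidate videos, assuming float32 (4 bytes per value) and ignoring index overhead:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "6c1d4f5a",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Back-of-envelope memory estimate for raw float32 embeddings.\n",
     "num_videos = 200  # candidate videos in the train set\n",
     "\n",
     "bytes_768 = num_videos * 768 * 4\n",
     "bytes_128 = num_videos * 128 * 4\n",
     "print(bytes_768, bytes_128, bytes_768 / bytes_128)  # 614400 102400 6.0"
    ]
   },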
  {
   "cell_type": "markdown",
   "id": "9c9c999b",
   "metadata": {},
   "source": [
     "Surprisingly, the performance is not affected much: both mHR and mAP decrease by only about 0.1, while the embedding size is reduced by a factor of 6 (from 768 to 128 dimensions)."
   ]
   }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.12"
  },
  "vscode": {
   "interpreter": {
    "hash": "f7dd10cdbe9a9c71f7e71741efd428241b5f9fa0fecdd29ae07a5706cd5ff8a2"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
