{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# About Edit Demo\n",
        "This notebook will show you a demo of video editing implemented through the BMF framework: through the Module that realizes two features used subgraph, it provides the overlay and concat capabilities of multiple audio and video channels, and completes a complex video editing pipeline."
      ],
      "metadata": {
        "id": "F5VPsX1Lixae"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 1. Install BMF in Python environment."
      ],
      "metadata": {
        "id": "MjaNzZ3fse4C"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install BabitMF"
      ],
      "metadata": {
        "id": "71DnSUiI28te"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 2.The Engine layer of the BMF framework is uniformly implemented in C++ language. In Colab, when python calls the C++ library, the log of the C++ library layer will be hidden, so it is necessary to install and load the wurlitezer library to enable logs in the C++ layer."
      ],
      "metadata": {
        "id": "MtkenCr7syuG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install wurlitzer\n",
        "%load_ext wurlitzer"
      ],
      "metadata": {
        "id": "O1lA6mqGIC5m"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 3. Download a sample transcoded video (using Big Bunny as an example here) and watermark of XiGua."
      ],
      "metadata": {
        "id": "3nyirq96tBuA"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!gdown --fuzzy https://drive.google.com/file/d/1l8bDSrWn6643aDhyaocVStXdoUbVC3o2/view?usp=sharing -O big_bunny_10s_30fps.mp4\n",
        "!gdown --fuzzy https://drive.google.com/file/d/1VHxQdIStg1pr7sNlORhUDroIhm3sIcvy/view?usp=sharing -O xigua_prefix_logo_x.mov"
      ],
      "metadata": {
        "id": "GU5LOzPXE1GW"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 4. Write and implement Edit Demo\n",
        "### Overall, Edit Demo can be decomposed into three sub-processes:\n",
        "### 1. Implement the video_overlay subgraph module.\n",
        "### 2. Implement the video_concat subgraph module.\n",
        "### 3. Build the work pipeline of Edit Demo.\n",
        "We will analyze in detail the above three processes one by one."
      ],
      "metadata": {
        "id": "wEYaqf_ct5Oa"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Implement the video_overlay subgraph module\n",
        "The following code implements a video_overlay Module with Python, this module receives a source stream (the first element of a list by default) and several overlay streams, and configures options through custom parameters to implement a process that makes FFmpeg overlay operation between all incoming overlay streams and source stream. You may have questions: why does the video_overlay Module not directly inherit class Module, but inherit class SubGraph? In fact, this is a sub-function of the BMF framework: SubGraph. If you want to know more about the functions and mechanisms of SubGraph, please go to: link to SubGraph. Here, you only need to understand one thing: SubGraph essentially exists as a Module, and it also constructs a graph structure internally, and this graph structure will participate in the calculation of the overall pipeline as the content of the Module.\n",
        "\n",
        "The option of video_overlay has two important properties: source and overlays:\n",
        "\n",
        "**source**: A set of descriptions of the original code stream (or overlay code stream) and related feature descriptions, dictionary structure.\n",
        "\n",
        "**overlays**: a list, each element in the list is a description dictionary corresponding to the overlay stream, such a dictionary can have multiple.\n",
        "\n",
        "In the create_graph function logic, the program builds a graph for overlay operations on a source stream and multiple overlay streams."
      ],
      "metadata": {
        "id": "S10idpjhuhzc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import bmf\n",
        "from bmf import bmf_sync, Packet\n",
        "from bmf import SubGraph\n",
        "\n",
        "'''\n",
        "Option example:\n",
        "    option = {\n",
        "        \"source\": {\n",
        "            \"start\": 0,\n",
        "            \"duration\": 5,\n",
        "            \"width\": 640,\n",
        "            \"height\": 480\n",
        "        },\n",
        "        \"overlays\": [\n",
        "            {\n",
        "                \"start\": 0,\n",
        "                \"duration\": 2,\n",
        "                \"width\": 300,\n",
        "                \"height\": 200,\n",
        "                \"pox_x\": 0,\n",
        "                \"pox_y\": 0,\n",
        "                \"loop\": -1,\n",
        "                \"repeat_last\": 0\n",
        "            },\n",
        "            {\n",
        "                \"start\": 2,\n",
        "                \"duration\": 2,\n",
        "                \"width\": 300,\n",
        "                \"height\": 200,\n",
        "                \"pox_x\": 'W-300',\n",
        "                \"pox_y\": 0,\n",
        "                \"loop\": 0,\n",
        "                \"repeat_last\": 1\n",
        "            }\n",
        "        ]\n",
        "    }\n",
        "'''\n",
        "\n",
        "\n",
        "class video_overlay(SubGraph):\n",
        "    def create_graph(self, option=None):\n",
        "        # create source stream\n",
        "        self.inputs.append('source')\n",
        "        source_stream = self.graph.input_stream('source')\n",
        "        # create overlay stream\n",
        "        overlay_streams = []\n",
        "        for (i, _) in enumerate(option['overlays']):\n",
        "            self.inputs.append('overlay_' + str(i))\n",
        "            overlay_streams.append(self.graph.input_stream('overlay_' + str(i)))\n",
        "\n",
        "        # pre-processing for source layer\n",
        "        info = option['source']\n",
        "        output_stream = (\n",
        "            source_stream.scale(info['width'], info['height'])\n",
        "                .trim(start=info['start'], duration=info['duration'])\n",
        "                .setpts('PTS-STARTPTS')\n",
        "        )\n",
        "\n",
        "        # overlay processing\n",
        "        for (i, overlay_stream) in enumerate(overlay_streams):\n",
        "            overlay_info = option['overlays'][i]\n",
        "\n",
        "            # overlay layer pre-processing\n",
        "            p_overlay_stream = (\n",
        "                overlay_stream.scale(overlay_info['width'], overlay_info['height'])\n",
        "                    .loop(loop=overlay_info['loop'], size=10000)\n",
        "                    .setpts('PTS+%f/TB' % (overlay_info['start']))\n",
        "            )\n",
        "\n",
        "            # calculate overlay parameter\n",
        "            x = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],\n",
        "                                                 overlay_info['start'] + overlay_info['duration'],\n",
        "                                                 str(overlay_info['pox_x']))\n",
        "            y = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],\n",
        "                                                 overlay_info['start'] + overlay_info['duration'],\n",
        "                                                 str(overlay_info['pox_y']))\n",
        "            if overlay_info['loop'] == -1:\n",
        "                repeat_last = 0\n",
        "                shortest = 1\n",
        "            else:\n",
        "                repeat_last = overlay_info['repeat_last']\n",
        "                shortest = 1\n",
        "\n",
        "            # do overlay\n",
        "            output_stream = (\n",
        "                output_stream.overlay(p_overlay_stream, x=x, y=y,\n",
        "                                      repeatlast=repeat_last)\n",
        "            )\n",
        "\n",
        "        # finish creating graph\n",
        "        self.output_streams = self.finish_create_graph([output_stream])"
      ],
      "metadata": {
        "id": "hVZ8L3-dIHH2"
      },
      "execution_count": null,
      "outputs": []
    },
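    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `x`/`y` strings built in `create_graph` are FFmpeg overlay position expressions that hide the overlay outside its time window. A minimal standalone sketch of how such an expression is composed (plain Python, independent of BMF; the helper name `overlay_position_expr` is hypothetical):\n",
        "\n",
        "```python\n",
        "def overlay_position_expr(start, duration, pos):\n",
        "    # visible only while start <= t <= start + duration; NAN hides the overlay\n",
        "    return 'if(between(t,%f,%f),%s,NAN)' % (start, start + duration, str(pos))\n",
        "\n",
        "print(overlay_position_expr(2, 2, 'W-300'))\n",
        "# if(between(t,2.000000,4.000000),W-300,NAN)\n",
        "```"
      ]
    },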
    {
      "cell_type": "markdown",
      "source": [
        "### Implement the subgraph module:video_concat\n",
        "\n",
        "Similar to the video_overlay module described above, video_concat is also a subgraph, and the option with protocol parameters has the following important attributes:\n",
        "\n",
        "width: the width of the source video\n",
        "\n",
        "height: the height of the source video\n",
        "\n",
        "has_audio: whether audio stream is required\n",
        "\n",
        "video_list: an array, each element is a description of a video stream:\n",
        "\n",
        "1. **start**: The video output stream requires the start time\n",
        "2. **duration**: The duration required by the video output stream\n",
        "3. **transition_time**: the start time of the transition operation\n",
        "\n",
        "In the processing logic, the module will loop and iteratively process the video stream. For each incoming video stream, first call the scale filter once to scale the video stream to the corresponding resolution, and then call the split filter to copy one video stream into two. Next, do Trim and Setpts operations on the first video stream, then check whether there is a prev_transition video stream, if yes, perform an overlay operation on the two video streams and store them in the concat array, if not, do not process. Finally, another video stream that has just been split, its start time will be set with (duration - transition_time), transition_time will be the duration of the video stream with Trim operation and SetPts operation before, and finally scaled to a fixed resolution of 200x200 and saved as the prev_transition video stream.\n",
        "\n",
        "\n",
        "For audio streams, if the user sets has_audio = 1, the module will also concat each audio stream by default, and the duration of each audio stream used for concat will align with the duration of the corresponding video stream."
      ],
      "metadata": {
        "id": "zFhSu1wFvGGC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Option example:\n",
        "    option = {\n",
        "        \"width\": 640,\n",
        "        \"height\": 480,\n",
        "        \"has_audio\": 1,\n",
        "        \"video_list\": [\n",
        "            {\n",
        "                \"start\": 0,\n",
        "                \"duration\": 2,\n",
        "                \"transition_time\": 1,\n",
        "                \"transition_mode\": 1\n",
        "            },\n",
        "            {\n",
        "                \"start\": 0,\n",
        "                \"duration\": 4,\n",
        "                \"transition_time\": 1,\n",
        "                \"transition_mode\": 1\n",
        "            },\n",
        "            {\n",
        "                \"start\": 3,\n",
        "                \"duration\": 4,\n",
        "                \"transition_time\": 1,\n",
        "                \"transition_mode\": 1\n",
        "            }\n",
        "        ]\n",
        "    }\n",
        "'''\n",
        "\n",
        "\n",
        "class video_concat(SubGraph):\n",
        "    def create_graph(self, option=None):\n",
        "        video_stream_cnt = len(option['video_list'])\n",
        "\n",
        "        # here we assume if have audio, audio stream count is equal to video\n",
        "        if option['has_audio'] == 1:\n",
        "            audio_stream_cnt = video_stream_cnt\n",
        "        else:\n",
        "            audio_stream_cnt = 0\n",
        "\n",
        "        # process video streams\n",
        "        concat_video_streams = []\n",
        "        prev_transition_stream = None\n",
        "        for i in range(video_stream_cnt):\n",
        "            # create a input stream\n",
        "            stream_name = 'video_' + str(i)\n",
        "            self.inputs.append(stream_name)\n",
        "            video_stream = (\n",
        "                self.graph.input_stream(stream_name)\n",
        "                    .scale(option['width'], option['height'])\n",
        "            )\n",
        "\n",
        "            if option['video_list'][i]['transition_time'] > 0 and i < video_stream_cnt - 1:\n",
        "                split_stream = video_stream.split()\n",
        "                video_stream = split_stream[0]\n",
        "                transition_stream = split_stream[1]\n",
        "            else:\n",
        "                transition_stream = None\n",
        "\n",
        "            # prepare concat stream\n",
        "            info = option['video_list'][i]\n",
        "            trim_time = info['duration'] - info['transition_time']\n",
        "            concat_stream = (\n",
        "                video_stream.trim(start=info['start'], duration=trim_time)\n",
        "                    .setpts('PTS-STARTPTS')\n",
        "            )\n",
        "\n",
        "            # do transition, here use overlay instead\n",
        "            if prev_transition_stream is not None:\n",
        "                concat_stream = concat_stream.overlay(prev_transition_stream, repeatlast=0)\n",
        "\n",
        "            # add to concat stream\n",
        "            concat_video_streams.append(concat_stream)\n",
        "\n",
        "            # prepare transition stream for next stream\n",
        "            if transition_stream is not None:\n",
        "                prev_transition_stream = (\n",
        "                    transition_stream.trim(start=trim_time, duration=info['transition_time'])\n",
        "                        .setpts('PTS-STARTPTS')\n",
        "                        .scale(200, 200)\n",
        "                )\n",
        "\n",
        "        # concat videos\n",
        "        concat_video_stream = bmf.concat(*concat_video_streams, n=video_stream_cnt, v=1, a=0)\n",
        "\n",
        "        # process audio\n",
        "        # actually, we can use another sub-graph module to process audio, we combine it\n",
        "        # in one module to show how to process multi-output in sub-graph\n",
        "        concat_audio_stream = None\n",
        "        if audio_stream_cnt > 0:\n",
        "            concat_audio_streams = []\n",
        "            for i in range(audio_stream_cnt):\n",
        "                # create a input stream\n",
        "                stream_name = 'audio_' + str(i)\n",
        "                self.inputs.append(stream_name)\n",
        "\n",
        "                # pre-processing for audio stream\n",
        "                info = option['video_list'][i]\n",
        "                trim_time = info['duration'] - info['transition_time']\n",
        "                audio_stream = (\n",
        "                    self.graph.input_stream(stream_name)\n",
        "                        .atrim(start=info['start'], duration=trim_time)\n",
        "                        .asetpts('PTS-STARTPTS')\n",
        "                        .afade(t='in', st=0, d=2)\n",
        "                        .afade(t='out', st=info['duration'] - 2, d=2)\n",
        "                )\n",
        "\n",
        "                # add to concat stream\n",
        "                concat_audio_streams.append(audio_stream)\n",
        "\n",
        "            # concat audio\n",
        "            concat_audio_stream = bmf.concat(*concat_audio_streams, n=audio_stream_cnt, v=0, a=1)\n",
        "\n",
        "        # finish creating graph\n",
        "        self.output_streams = self.finish_create_graph([concat_video_stream, concat_audio_stream])"
      ],
      "metadata": {
        "id": "NyK90WvhvFuG"
      },
      "execution_count": null,
      "outputs": []
    },
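    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the trim arithmetic concrete: each clip contributes (duration - transition_time) seconds to the concat body, and the remaining transition_time seconds become the transition material for the next clip (the last clip produces no transition). A minimal sketch (plain Python; the helper name `concat_plan` is hypothetical):\n",
        "\n",
        "```python\n",
        "def concat_plan(video_list):\n",
        "    # (body_seconds, transition_seconds) per clip, mirroring trim_time above\n",
        "    return [(v['duration'] - v['transition_time'], v['transition_time'])\n",
        "            for v in video_list]\n",
        "\n",
        "print(concat_plan([{'duration': 10, 'transition_time': 2}] * 3))\n",
        "# [(8, 2), (8, 2), (8, 2)]\n",
        "```"
      ]
    },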
    {
      "cell_type": "markdown",
      "source": [
        "## Implement a complex video editing pipeline\n",
        "\n",
        "This code calls the two modules implemented above, first creates three video streams, overlays them with the logo of Xigua Video respectively, then sends the three processed video streams and corresponding audio streams to video_concat module, a complex audio and video editing Demo is implemented, in which the topology of the bmf graph can be represented by the following figure:\n",
        "\n",
        "![edit_demo.drawio.png]()\n",
        "\n",
        "\n"
      ],
      "metadata": {
        "id": "fu9atAlyxizF"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "input_video_path = \"./big_bunny_10s_30fps.mp4\"\n",
        "logo_path = \"./xigua_prefix_logo_x.mov\"\n",
        "output_path = \"./complex_edit.mp4\"\n",
        "dump_graph = 0\n",
        "# create graph\n",
        "duration = 10\n",
        "\n",
        "overlay_option = {\n",
        "    \"dump_graph\": dump_graph,\n",
        "    \"source\": {\n",
        "        \"start\": 0,\n",
        "        \"duration\": duration,\n",
        "        \"width\": 1280,\n",
        "        \"height\": 720\n",
        "    },\n",
        "    \"overlays\": [\n",
        "        {\n",
        "            \"start\": 0,\n",
        "            \"duration\": duration,\n",
        "            \"width\": 300,\n",
        "            \"height\": 200,\n",
        "            \"pox_x\": 0,\n",
        "            \"pox_y\": 0,\n",
        "            \"loop\": 0,\n",
        "            \"repeat_last\": 1\n",
        "        }\n",
        "    ]\n",
        "}\n",
        "\n",
        "concat_option = {\n",
        "    \"dump_graph\": dump_graph,\n",
        "    \"width\": 1280,\n",
        "    \"height\": 720,\n",
        "    # if have audio input\n",
        "    \"has_audio\": 1,\n",
        "    \"video_list\": [\n",
        "        {\n",
        "            \"start\": 0,\n",
        "            \"duration\": duration,\n",
        "            \"transition_time\": 2,\n",
        "            \"transition_mode\": 1\n",
        "        },\n",
        "        {\n",
        "            \"start\": 0,\n",
        "            \"duration\": duration,\n",
        "            \"transition_time\": 2,\n",
        "            \"transition_mode\": 1\n",
        "        },\n",
        "        {\n",
        "            \"start\": 0,\n",
        "            \"duration\": duration,\n",
        "            \"transition_time\": 2,\n",
        "            \"transition_mode\": 1\n",
        "        }\n",
        "    ]\n",
        "}\n",
        "\n",
        "# create graph\n",
        "my_graph = bmf.graph({\n",
        "    \"dump_graph\": dump_graph\n",
        "})\n",
        "\n",
        "# three logo video\n",
        "logo_1 = my_graph.decode({'input_path': logo_path})['video']\n",
        "logo_2 = my_graph.decode({'input_path': logo_path})['video']\n",
        "logo_3 = my_graph.decode({'input_path': logo_path})['video']\n",
        "\n",
        "# three videos\n",
        "video1 = my_graph.decode({'input_path': input_video_path})\n",
        "video2 = my_graph.decode({'input_path': input_video_path})\n",
        "video3 = my_graph.decode({'input_path': input_video_path})\n",
        "\n",
        "# do overlay\n",
        "overlay_streams = list()\n",
        "overlay_streams.append(bmf.module([video1['video'], logo_1], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])\n",
        "overlay_streams.append(bmf.module([video2['video'], logo_2], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])\n",
        "overlay_streams.append(bmf.module([video3['video'], logo_3], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])\n",
        "\n",
        "# do concat\n",
        "concat_streams = (\n",
        "    bmf.module([\n",
        "        overlay_streams[0],\n",
        "        overlay_streams[1],\n",
        "        overlay_streams[2],\n",
        "        video1['audio'],\n",
        "        video2['audio'],\n",
        "        video3['audio']\n",
        "    ], 'video_concat', concat_option, entry='__main__.video_concat')\n",
        ")\n",
        "\n",
        "# encode\n",
        "(\n",
        "    bmf.encode(concat_streams[0], concat_streams[1], {\n",
        "        \"output_path\": output_path,\n",
        "        \"video_params\": {\n",
        "            \"width\": 1280,\n",
        "            \"height\": 720,\n",
        "            'codec': 'h264'\n",
        "        }\n",
        "    }).run()\n",
        ")"
      ],
      "metadata": {
        "id": "vM3JZjmXHO-k"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 5. Display the video streams before and after processing."
      ],
      "metadata": {
        "id": "nzn35ZPZvhyR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from IPython.display import HTML\n",
        "from base64 import b64encode\n",
        "\n",
        "def show_video(video_path, video_width = 800):\n",
        "  video_file = open(video_path, \"r+b\").read()\n",
        "  video_url = f\"data:video/mp4;base64,{b64encode(video_file).decode()}\"\n",
        "  return f\"\"\"\n",
        "  <video width={video_width} controls>\n",
        "    <source src=\"{video_url}\">\n",
        "  </video>\n",
        "  \"\"\"\n",
        "\n",
        "video_url1 = show_video('big_bunny_10s_30fps.mp4')\n",
        "video_url2 = show_video('complex_edit.mp4')\n",
        "\n",
        "html = video_url1 + video_url2\n",
        "HTML(html)"
      ],
      "metadata": {
        "id": "zACHQOdMvbEZ"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}