{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "OsFaZscKSPvo"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
        "\n",
        "# Licensed to the Apache Software Foundation (ASF) under one\n",
        "# or more contributor license agreements. See the NOTICE file\n",
        "# distributed with this work for additional information\n",
        "# regarding copyright ownership. The ASF licenses this file\n",
        "# to you under the Apache License, Version 2.0 (the\n",
        "# \"License\"); you may not use this file except in compliance\n",
        "# with the License. You may obtain a copy of the License at\n",
        "#\n",
        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing,\n",
        "# software distributed under the License is distributed on an\n",
        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
        "# KIND, either express or implied. See the License for the\n",
        "# specific language governing permissions and limitations\n",
        "# under the License"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZUSiAR62SgO8"
      },
      "source": [
        "# Update ML models in running pipelines\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/automatic_model_refresh.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/automatic_model_refresh.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n",
        "  </td>\n",
        "</table>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tBtqF5UpKJNZ"
      },
      "source": [
        "This notebook demonstrates how to perform automatic model updates without stopping your Apache Beam pipeline.\n",
        "You can use side inputs to update your model in real time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
        "\n",
        "The pipeline in this notebook uses a RunInference `PTransform` with TensorFlow machine learning (ML) models to run inference on images. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
        "\n",
        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for file updates that match the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SPuXFowiTpWx"
      },
      "source": [
        "## Before you begin\n",
        "Install the dependencies required to run this notebook.\n",
        "\n",
        "To use RunInference with side inputs for automatic model updates, use Apache Beam version 2.46.0 or later."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1RyTYsFEIOlA"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "!pip install 'apache_beam[interactive,gcp]>=2.46.0' tensorflow==2.15.0 tensorflow_hub==0.16.1 keras==2.15.0 Pillow==11.0.0 --quiet"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Rs4cwwNrIV9H"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# Imports required for the notebook.\n",
        "import logging\n",
        "import time\n",
        "import os\n",
        "from typing import Iterable\n",
        "from typing import Tuple\n",
        "\n",
        "import apache_beam as beam\n",
        "from apache_beam.ml.inference.base import PredictionResult\n",
        "from apache_beam.ml.inference.base import RunInference\n",
        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
        "from apache_beam.options.pipeline_options import PipelineOptions\n",
        "from apache_beam.options.pipeline_options import SetupOptions\n",
        "from apache_beam.options.pipeline_options import StandardOptions\n",
        "from apache_beam.options.pipeline_options import WorkerOptions\n",
        "from apache_beam.transforms.periodicsequence import PeriodicImpulse\n",
        "import numpy\n",
        "from PIL import Image\n",
        "import tensorflow as tf"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jAKpPcmmGm03"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# Authenticate to your Google Cloud account.\n",
        "def auth_to_colab():\n",
        "  from google.colab import auth\n",
        "  auth.authenticate_user()\n",
        "\n",
        "auth_to_colab()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ORYNKhH3WQyP"
      },
      "source": [
        "## Configure the runner\n",
        "\n",
        "This pipeline uses the Dataflow Runner. To run the pipeline, you need to complete the following tasks:\n",
        "\n",
        "* Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
        "* Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode.\n",
        "\n",
        "In the following code, replace `BUCKET_NAME` with the name of your Cloud Storage bucket."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wWjbnq6X-4uE"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "options = PipelineOptions()\n",
        "options.view_as(StandardOptions).streaming = True\n",
        "\n",
        "# Replace with your bucket name.\n",
        "BUCKET_NAME = '<BUCKET_NAME>' # @param {type:'string'}\n",
        "os.environ['BUCKET_NAME'] = BUCKET_NAME\n",
        "\n",
        "# Provide required pipeline options for the Dataflow Runner.\n",
        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
        "\n",
        "# Set the project to the default project in your current Google Cloud environment.\n",
        "PROJECT_NAME = '<PROJECT_NAME>' # @param {type:'string'}\n",
        "options.view_as(GoogleCloudOptions).project = PROJECT_NAME\n",
        "\n",
        "# Set the Google Cloud region that you want to run Dataflow in.\n",
        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
        "\n",
        "# The Cloud Storage path, derived from BUCKET_NAME, that Dataflow uses for staging and temporary files.\n",
        "dataflow_gcs_location = \"gs://%s/dataflow\" % BUCKET_NAME\n",
        "\n",
        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
        "\n",
        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
        "\n",
        "options.view_as(SetupOptions).save_main_session = True\n",
        "\n",
        "# Launching Dataflow with only one worker might result in processing delays due to\n",
        "# initial input processing. This could further postpone the side input model updates.\n",
        "# To expedite the model update process, it's recommended to set num_workers>1.\n",
        "# https://github.com/apache/beam/issues/28776\n",
        "options.view_as(WorkerOptions).num_workers = 5"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HTJV8pO2Wcw4"
      },
      "source": [
        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "lEy4PkluWbdm"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# In a requirements file, define the dependencies required for the pipeline.\n",
        "!printf 'tensorflow==2.15.0\\ntensorflow_hub==0.16.1\\nkeras==2.15.0\\nPillow==11.0.0' > ./requirements.txt\n",
        "# Install the pipeline dependencies on Dataflow.\n",
        "options.view_as(SetupOptions).requirements_file = './requirements.txt'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_AUNH_GJk_NE"
      },
      "source": [
        "## Use the TensorFlow model handler\n",
        "This example uses `TFModelHandlerTensor` as the model handler and a `ResNet101` model trained on [ImageNet](https://www.image-net.org/).\n",
        "\n",
        "\n",
        "For the Dataflow runner, you need to store the model in a remote location that the Apache Beam pipeline can access. For this example, download the `ResNet101` model, and upload it to the Google Cloud Storage bucket.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ibkWiwVNvyrn"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "model = tf.keras.applications.resnet.ResNet101()\n",
        "model.save('resnet101_weights_tf_dim_ordering_tf_kernels.keras')\n",
        "# After saving the model locally, upload it to the Cloud Storage bucket, and provide that URI as `model_uri` to the `TFModelHandlerTensor`.\n",
        "!gsutil cp resnet101_weights_tf_dim_ordering_tf_kernels.keras gs://${BUCKET_NAME}/dataflow/resnet101_weights_tf_dim_ordering_tf_kernels.keras"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kkSnsxwUk-Sp"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "model_handler = TFModelHandlerTensor(\n",
        "    model_uri=dataflow_gcs_location + \"/resnet101_weights_tf_dim_ordering_tf_kernels.keras\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tZH0r0sL-if5"
      },
      "source": [
        "## Preprocess images\n",
        "\n",
        "Use the `preprocess_image` function to read an image, resize it, and convert it to a TensorFlow tensor that RunInference can use."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dU5imgTt-8Ne"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "def preprocess_image(image_name, image_dir):\n",
        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
        "  img = Image.open(img).resize((224, 224))\n",
        "  img = numpy.array(img) / 255.0\n",
        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
        "  return img_tensor"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6V5tJxO6-gyt"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "class PostProcessor(beam.DoFn):\n",
        "  \"\"\"Processes the PredictionResult to get the predicted label.\n",
        "\n",
        "  Yields a tuple of the predicted label and the model ID used for the inference.\n",
        "  \"\"\"\n",
        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
        "    labels_path = tf.keras.utils.get_file(\n",
        "        'ImageNetLabels.txt',\n",
        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
        "    )\n",
        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
        "    predicted_class_name = imagenet_labels[predicted_class]\n",
        "    yield predicted_class_name.title(), element.model_id"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GpdKk72O_NXT"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# Define the pipeline object.\n",
        "pipeline = beam.Pipeline(options=options)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "elZ53uxc_9Hv"
      },
      "source": [
        "Next, review the pipeline steps and examine the code.\n",
        "\n",
        "### Pipeline steps\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "305tkV2sAD-S"
      },
      "source": [
        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements at a given runtime interval.\n",
        "\n",
        "   In this example, `PeriodicImpulse` mimics a Pub/Sub source. Because the inputs in a streaming pipeline arrive at intervals, use `PeriodicImpulse` to emit elements at the same interval.\n",
        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vUFStz66_Tbb"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse (will run for 20 minutes).\n",
        "main_input_fire_interval = 60 # interval in seconds at which the main input PCollection is emitted.\n",
        "side_input_fire_interval = 60 # interval in seconds at which the side input PCollection is emitted.\n",
        "\n",
        "periodic_impulse = (\n",
        "      pipeline\n",
        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
        "          start_timestamp=start_timestamp,\n",
        "          stop_timestamp=end_timestamp,\n",
        "          fire_interval=main_input_fire_interval))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8-sal2rFAxP2"
      },
      "source": [
        "2. To read and preprocess the images, use the `preprocess_image` function. This example uses `Cat-with-beanie.jpg` for all inferences.\n",
        "\n",
        "  **Note**: The image used for prediction is licensed in CC-BY. The creator is listed in the [LICENSE.txt](https://storage.googleapis.com/apache-beam-samples/image_captioning/LICENSE.txt) file."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gW4cE8bhXS-d"
      },
      "source": [
        "![Cat-with-beanie.jpg](https://storage.googleapis.com/apache-beam-samples/image_captioning/Cat-with-beanie.jpg)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dGg11TpV_aV6"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
        "      | \"ReadImage\" >> beam.Map(lambda image_name: preprocess_image(\n",
        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eB0-ewd-BCKE"
      },
      "source": [
        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
        "  * `model_metadata_pcoll` is a side input `PCollection` to the RunInference `PTransform`. This side input updates the `model_uri` in the `model_handler` while the Apache Beam pipeline runs.\n",
        "  * Use `WatchFilePattern` as a side input to watch for file updates that match `.keras` files. In this case, the `file_pattern` is `'gs://<BUCKET_NAME>/dataflow/*.keras'`.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_AjvvexJ_hUq"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# The side input used to watch for the .keras file and update the model_uri of the TFModelHandlerTensor.\n",
        "file_pattern = dataflow_gcs_location + '/*.keras'\n",
        "side_input_pcoll = (\n",
        "      pipeline\n",
        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
        "                                                interval=side_input_fire_interval,\n",
        "                                                stop_timestamp=end_timestamp))\n",
        "inferences = (\n",
        "      image_data\n",
        "      | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
        "      | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
        "                                      model_metadata_pcoll=side_input_pcoll))"
      ]
    },
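    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The elements that the side input emits are `ModelMetadata` objects. As a minimal sketch (the model path below is a placeholder), this is the shape of the update that `WatchFilePattern`, or a custom side input `PCollection`, passes to RunInference:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from apache_beam.ml.inference.base import ModelMetadata\n",
        "\n",
        "# model_id is the path of the updated model that RunInference loads next;\n",
        "# model_name is a short, unique name used for metrics.\n",
        "# The URI here is a placeholder, not a real model.\n",
        "example_update = ModelMetadata(\n",
        "    model_id='gs://<BUCKET_NAME>/dataflow/updated_model.keras',\n",
        "    model_name='updated_model')"
      ]
    },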
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lTA4wRWNDVis"
      },
      "source": [
        "4. Post-process the `PredictionResult` object.\n",
        "When the inference is complete, RunInference outputs a `PredictionResult` object that contains the fields `example`, `inference`, and `model_id`. The `model_id` field identifies the model used to run the inference. The `PostProcessor` returns the predicted label and the model ID used to run the inference on the predicted label."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9TB76fo-_vZJ"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "post_processor = (\n",
        "    inferences\n",
        "    | \"PostProcessResults\" >> beam.ParDo(PostProcessor())\n",
        "    | \"LogResults\" >> beam.Map(logging.info))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wYp-mBHHjOjA"
      },
      "source": [
        "### Watch for the model update\n",
        "\n",
        "After the pipeline starts processing data, when you see output emitted from the RunInference `PTransform`, upload a `resnet152` model saved in the `.keras` format to a Google Cloud Storage bucket location that matches the `file_pattern` you defined earlier.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FpUfNBSWH9Xy"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "model = tf.keras.applications.resnet.ResNet152()\n",
        "model.save('resnet152_weights_tf_dim_ordering_tf_kernels.keras')\n",
        "!gsutil cp resnet152_weights_tf_dim_ordering_tf_kernels.keras gs://${BUCKET_NAME}/dataflow/resnet152_weights_tf_dim_ordering_tf_kernels.keras"
      ]
    },
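    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, list the files that currently match the watched `file_pattern` (this assumes the `BUCKET_NAME` environment variable set earlier). The new `.keras` file must appear in this listing for `WatchFilePattern` to pick it up and trigger the model update:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Lists the objects that match the pattern watched by the side input.\n",
        "!gsutil ls gs://${BUCKET_NAME}/dataflow/*.keras"
      ]
    },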
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_ty03jDnKdKR"
      },
      "source": [
        "## Run the pipeline\n",
        "\n",
        "Use the following code to run the pipeline."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wd0VJLeLEWBU"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n"
          ]
        }
      ],
      "source": [
        "# Run the pipeline.\n",
        "result = pipeline.run().wait_until_finish()"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "include_colab_link": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
