{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c8d4afc9",
   "metadata": {},
   "source": [
    "# Deploying FastKafka using Docker"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7311d5f",
   "metadata": {},
   "source": [
    "## Building a Docker Image\n",
    "\n",
    "To build a Docker image for a FastKafka project, we need the following items:\n",
    "\n",
    "1. A library that is built using FastKafka.\n",
    "2. A file in which the requirements are specified. This could be a requirements.txt file, a setup.py file, or even a wheel file.\n",
    "3. A Dockerfile to build an image that will include the two files mentioned above."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e7b9e7a",
   "metadata": {},
   "source": [
    "### Creating FastKafka Code\n",
    "\n",
    "Let's create a `FastKafka`-based application and write it to the `application.py` file based on the [tutorial](/docs#tutorial).\n",
    "\n",
    "```python\n",
    "# content of the \"application.py\" file\n",
    "\n",
    "from contextlib import asynccontextmanager\n",
    "\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "\n",
    "ml_models = {}\n",
    "\n",
    "\n",
    "@asynccontextmanager\n",
    "async def lifespan(app: FastKafka):\n",
    "    # Load the ML model\n",
    "    X, y = load_iris(return_X_y=True)\n",
    "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
    "        X, y\n",
    "    )\n",
    "    yield\n",
    "    # Clean up the ML models and release the resources\n",
    "    ml_models.clear()\n",
    "\n",
    "\n",
    "from pydantic import BaseModel, NonNegativeFloat, Field\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "\n",
    "kafka_brokers = {\n",
    "    \"localhost\": {\n",
    "        \"url\": \"localhost\",\n",
    "        \"description\": \"local development kafka broker\",\n",
    "        \"port\": 9092,\n",
    "    },\n",
    "    \"production\": {\n",
    "        \"url\": \"kafka.airt.ai\",\n",
    "        \"description\": \"production kafka broker\",\n",
    "        \"port\": 9092,\n",
    "        \"protocol\": \"kafka-secure\",\n",
    "        \"security\": {\"type\": \"plain\"},\n",
    "    },\n",
    "}\n",
    "\n",
    "kafka_app = FastKafka(\n",
    "    title=\"Iris predictions\",\n",
    "    kafka_brokers=kafka_brokers,\n",
    "    lifespan=lifespan,\n",
    ")\n",
    "\n",
    "@kafka_app.consumes(topic=\"input_data\", auto_offset_reset=\"latest\")\n",
    "async def on_input_data(msg: IrisInputData):\n",
    "    species_class = ml_models[\"iris_predictor\"].predict(\n",
    "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
    "    )[0]\n",
    "\n",
    "    await to_predictions(species_class)\n",
    "\n",
    "\n",
    "@kafka_app.produces(topic=\"predictions\")\n",
    "async def to_predictions(species_class: int) -> IrisPrediction:\n",
    "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
    "\n",
    "    prediction = IrisPrediction(species=iris_species[species_class])\n",
    "    return prediction\n",
    "\n",
    "```"
   ]
  },
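  {
   "cell_type": "markdown",
   "id": "1f2e3d4c",
   "metadata": {},
   "source": [
    "To get a feel for the wire format, here is a small, hypothetical sketch of a payload a producer could send to the `input_data` topic. It assumes the default JSON encoding of pydantic models; the field names must match the `IrisInputData` model above.\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# A message destined for the \"input_data\" topic; keys mirror IrisInputData.\n",
    "message = {\n",
    "    \"sepal_length\": 0.5,\n",
    "    \"sepal_width\": 0.5,\n",
    "    \"petal_length\": 0.5,\n",
    "    \"petal_width\": 0.5,\n",
    "}\n",
    "\n",
    "# Kafka transports raw bytes, so the dict is serialized to UTF-8 encoded JSON.\n",
    "payload = json.dumps(message).encode(\"utf-8\")\n",
    "print(payload)\n",
    "```\n",
    "\n",
    "A message like this arriving on the `input_data` topic triggers `on_input_data`, which in turn publishes an `IrisPrediction` to the `predictions` topic."
   ]
  },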
  {
   "cell_type": "markdown",
   "id": "44a9370f",
   "metadata": {},
   "source": [
    "### Creating requirements.txt file\n",
    "\n",
    "The above code only requires FastKafka. So, we will add only that to the `requirements.txt` file, but you can add additional requirements to it as well.\n",
    "\n",
    "```txt\n",
    "fastkafka>=0.3.0\n",
    "```\n",
    "\n",
    "Here we are using `requirements.txt` to store the project's dependencies. However, other methods like `setup.py`, `pipenv`, and `wheel` files can also be used. `setup.py` is commonly used for packaging and distributing Python modules, while `pipenv` is a tool used for managing virtual environments and package dependencies. `wheel` files are built distributions of Python packages that can be installed with pip."
   ]
  },
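  {
   "cell_type": "markdown",
   "id": "9d8c7b6a",
   "metadata": {},
   "source": [
    "If you prefer packaging the project rather than listing bare requirements, a minimal `setup.py` could look like the following sketch. The project name and version are placeholders; adjust them to your project.\n",
    "\n",
    "```python\n",
    "# content of a hypothetical \"setup.py\" file\n",
    "\n",
    "from setuptools import setup\n",
    "\n",
    "setup(\n",
    "    name=\"fastkafka_project\",  # placeholder project name\n",
    "    version=\"0.1.0\",\n",
    "    py_modules=[\"application\"],\n",
    "    install_requires=[\"fastkafka>=0.3.0\", \"scikit-learn\"],\n",
    ")\n",
    "```\n",
    "\n",
    "The Dockerfile's install step would then become `pip install /project/` instead of installing from the requirements file."
   ]
  },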
  {
   "cell_type": "markdown",
   "id": "911436ab",
   "metadata": {},
   "source": [
    "### Creating Dockerfile\n",
    "\n",
    "```{ .dockerfile .annotate }\n",
    "# (1)\n",
    "FROM python:3.9-slim-bullseye\n",
    "# (2)\n",
    "WORKDIR /project\n",
    "# (3)\n",
    "COPY application.py requirements.txt /project/\n",
    "# (4)\n",
    "RUN pip install --no-cache-dir --upgrade -r /project/requirements.txt\n",
    "# (5)\n",
    "CMD [\"fastkafka\", \"run\", \"--num-workers\", \"2\", \"--kafka-broker\", \"production\", \"application:kafka_app\"]\n",
    "```\n",
    "\n",
    "1. Start from the official Python base image.\n",
    "\n",
    "2. Set the current working directory to `/project`.\n",
    "\n",
    "    This is where we'll put the `requirements.txt` file and the `application.py` file.\n",
    "\n",
    "3. Copy the `application.py` file and `requirements.txt` file inside the `/project` directory.\n",
    "\n",
    "4. Install the package dependencies in the requirements file.\n",
    "\n",
    "    The `--no-cache-dir` option tells `pip` to not save the downloaded packages locally, as that is only if `pip` was going to be run again to install the same packages, but that's not the case when working with containers.\n",
    "\n",
    "    The `--upgrade` option tells `pip` to upgrade the packages if they are already installed.\n",
    "\n",
    "5. Set the **command** to run the `fastkafka run` command.\n",
    "\n",
    "    `CMD` takes a list of strings, each of these strings is what you would type in the command line separated by spaces.\n",
    "\n",
    "    This command will be run from the **current working directory**, the same `/project` directory you set above with `WORKDIR /project`.\n",
    "\n",
    "    We supply additional parameters `--num-workers` and `--kafka-broker` for the run command. Finally, we specify the location of our FastKafka application as a command argument.\n",
    "    \n",
    "    To learn more about `fastkafka run` command please check the [CLI docs](../../cli/fastkafka/#fastkafka-run).\n"
   ]
  },
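  {
   "cell_type": "markdown",
   "id": "5a6b7c8d",
   "metadata": {},
   "source": [
    "Conceptually, the `--kafka-broker production` flag selects one entry from the `kafka_brokers` dictionary defined in `application.py` and connects the workers to that address. The snippet below is only an illustrative sketch of that lookup, not FastKafka's actual internals.\n",
    "\n",
    "```python\n",
    "# The same broker definitions as in application.py (abridged).\n",
    "kafka_brokers = {\n",
    "    \"localhost\": {\"url\": \"localhost\", \"port\": 9092},\n",
    "    \"production\": {\"url\": \"kafka.airt.ai\", \"port\": 9092},\n",
    "}\n",
    "\n",
    "def bootstrap_server(broker_name: str) -> str:\n",
    "    # Build the bootstrap-server address for the chosen broker entry.\n",
    "    broker = kafka_brokers[broker_name]\n",
    "    return f\"{broker['url']}:{broker['port']}\"\n",
    "\n",
    "print(bootstrap_server(\"production\"))  # kafka.airt.ai:9092\n",
    "```\n",
    "\n",
    "Passing `--kafka-broker localhost` instead would point the workers at the local development broker."
   ]
  },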
  {
   "cell_type": "markdown",
   "id": "2ad51d39",
   "metadata": {},
   "source": [
    "### Build the Docker Image\n",
    "\n",
    "Now that all the files are in place, let's build the container image.\n",
    "\n",
    "1. Go to the project directory (where your `Dockerfile` is, containing your `application.py` file).\n",
    "2. Run the following command to build the image:\n",
    "    \n",
    "    ```cmd\n",
    "    docker build -t fastkafka_project_image .\n",
    "    ```\n",
    "    \n",
    "    This command will create a docker image with the name `fastkafka_project_image` and the `latest` tag.\n",
    "   \n",
    "That's it! You have now built a docker image for your FastKafka project."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bfe73a22",
   "metadata": {},
   "source": [
    "### Start the Docker Container\n",
    "\n",
    "Run a container based on the built image:\n",
    "```cmd\n",
    "docker run -d --name fastkafka_project_container fastkafka_project_image\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eec10a57",
   "metadata": {},
   "source": [
    "## Additional Security\n",
    "\n",
    "`Trivy` is an open-source tool that scans Docker images for vulnerabilities. It can be integrated into your CI/CD pipeline to ensure that your images are secure and free from known vulnerabilities. Here's how you can use `trivy` to scan your `fastkafka_project_image`:\n",
    "\n",
    "1. Install `trivy` on your local machine by following the instructions provided in the [official `trivy` documentation](https://aquasecurity.github.io/trivy/latest/getting-started/installation/).\n",
    "\n",
    "2. Run the following command to scan your fastkafka_project_image:\n",
    "    \n",
    "    ```cmd\n",
    "    trivy image fastkafka_project_image\n",
    "    ```\n",
    "    \n",
    "    This command will scan your `fastkafka_project_image` for any vulnerabilities and provide you with a report of its findings.\n",
    "\n",
    "3. Fix any vulnerabilities identified by `trivy`. You can do this by updating the vulnerable package to a more secure version or by using a different package altogether.\n",
    "\n",
    "4. Rebuild your `fastkafka_project_image` and repeat steps 2 and 3 until `trivy` reports no vulnerabilities.\n",
    "\n",
    "By using `trivy` to scan your Docker images, you can ensure that your containers are secure and free from known vulnerabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e2f403c",
   "metadata": {},
   "source": [
    "## Example repo\n",
    "\n",
    "A `FastKafka` based library which uses above mentioned Dockerfile to build a docker image can be found [here](https://github.com/airtai/sample_fastkafka_project/)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
