{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1e1fcb5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "\n",
    "import sys\n",
    "from IPython.display import Markdown\n",
    "\n",
    "from tempfile import TemporaryDirectory\n",
    "from pathlib import Path\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "from fastkafka._components.helpers import change_dir, _import_from_string"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f7da504",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "# | notest\n",
    "\n",
    "import nest_asyncio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d6f852f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "# | notest\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a6a18bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "\n",
    "from pydantic import BaseModel, Field, NonNegativeFloat\n",
    "\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8eecb8b6",
   "metadata": {},
   "source": [
    "# Encoding and Decoding Kafka Messages with FastKafka"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "642ac9ba",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "\n",
    "1. A basic knowledge of `FastKafka` is needed to proceed with this guide. If you are not familiar with `FastKafka`, please go through the [tutorial](/docs#tutorial) first.\n",
     "2. `FastKafka` with its dependencies installed is needed. Please install `FastKafka` using the command `pip install fastkafka`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "629f0460",
   "metadata": {},
   "source": [
    "## Ways to Encode and Decode Messages with FastKafka\n",
    "\n",
     "In Python, by default, we send Kafka messages as bytes. Even if our message is a string, we convert it to bytes before sending it to a Kafka topic. Similarly, while consuming, we receive messages as bytes and then convert them to strings.\n",
    "\n",
     "In FastKafka, we specify the message schema using Pydantic models, as mentioned in the [tutorial](/docs#messages):\n",
    "\n",
    "```python\n",
    "# Define Pydantic models for Kafka messages\n",
    "from pydantic import BaseModel, NonNegativeFloat, Field\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
    "```\n",
    "\n",
    "\n",
     "Then, we send and receive messages as instances of the Pydantic models we defined. So, FastKafka needs a way to encode these Pydantic model messages to bytes (and decode them back) in order to send and receive messages to and from Kafka topics.\n",
    "\n",
    "The `@consumes` and `@produces` methods of FastKafka accept a parameter called `decoder`/`encoder` to decode/encode Kafka messages. FastKafka provides three ways to encode and decode messages:\n",
    "\n",
    "1. json - This is the default encoder/decoder option in FastKafka. While producing, this option converts our instance of Pydantic model messages to a JSON string and then converts it to bytes before sending it to the topic. While consuming, it converts bytes to a JSON string and then constructs an instance of Pydantic model from the JSON string.\n",
    "2. avro - This option uses Avro encoding/decoding to convert instances of Pydantic model messages to bytes while producing, and while consuming, it constructs an instance of Pydantic model from bytes.\n",
    "3. custom encoder/decoder - If you are not happy with the json or avro encoder/decoder options, you can write your own encoder/decoder functions and use them to encode/decode Pydantic messages."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37e1a5ee",
   "metadata": {},
   "source": [
     "## 1. JSON encoder and decoder\n",
     "\n",
     "\n",
     "The default option in FastKafka is the json encoder/decoder. While producing, it converts our Pydantic model instances to JSON strings and then to bytes before sending them to the topic. While consuming, it converts bytes to JSON strings and then constructs Pydantic model instances from those strings.\n",
    "\n",
     "We can use the application from the [tutorial](/docs#running-the-service) as is, and it will use the json encoder/decoder by default. But, for clarity, let's modify it to pass the 'json' decoder/encoder parameters explicitly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "32362501",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "\n",
       "```python\n",
       "# content of the \"application.py\" file\n",
       "\n",
       "from contextlib import asynccontextmanager\n",
       "\n",
       "from sklearn.datasets import load_iris\n",
       "from sklearn.linear_model import LogisticRegression\n",
       "\n",
       "from fastkafka import FastKafka\n",
       "\n",
       "ml_models = {}\n",
       "\n",
       "\n",
       "@asynccontextmanager\n",
       "async def lifespan(app: FastKafka):\n",
       "    # Load the ML model\n",
       "    X, y = load_iris(return_X_y=True)\n",
       "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
       "        X, y\n",
       "    )\n",
       "    yield\n",
       "    # Clean up the ML models and release the resources\n",
       "    ml_models.clear()\n",
       "\n",
       "\n",
       "from pydantic import BaseModel, NonNegativeFloat, Field\n",
       "\n",
       "class IrisInputData(BaseModel):\n",
       "    sepal_length: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Sepal length in cm\"\n",
       "    )\n",
       "    sepal_width: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Sepal width in cm\"\n",
       "    )\n",
       "    petal_length: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Petal length in cm\"\n",
       "    )\n",
       "    petal_width: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Petal width in cm\"\n",
       "    )\n",
       "\n",
       "\n",
       "class IrisPrediction(BaseModel):\n",
       "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
       "    \n",
       "from fastkafka import FastKafka\n",
       "\n",
       "kafka_brokers = {\n",
       "    \"localhost\": {\n",
       "        \"url\": \"localhost\",\n",
       "        \"description\": \"local development kafka broker\",\n",
       "        \"port\": 9092,\n",
       "    },\n",
       "    \"production\": {\n",
       "        \"url\": \"kafka.airt.ai\",\n",
       "        \"description\": \"production kafka broker\",\n",
       "        \"port\": 9092,\n",
       "        \"protocol\": \"kafka-secure\",\n",
       "        \"security\": {\"type\": \"plain\"},\n",
       "    },\n",
       "}\n",
       "\n",
       "kafka_app = FastKafka(\n",
       "    title=\"Iris predictions\",\n",
       "    kafka_brokers=kafka_brokers,\n",
       "    lifespan=lifespan,\n",
       ")\n",
       "\n",
       "@kafka_app.consumes(topic=\"input_data\", decoder=\"json\")\n",
       "async def on_input_data(msg: IrisInputData):\n",
       "    species_class = ml_models[\"iris_predictor\"].predict(\n",
       "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
       "    )[0]\n",
       "\n",
       "    await to_predictions(species_class)\n",
       "\n",
       "\n",
       "@kafka_app.produces(topic=\"predictions\", encoder=\"json\")\n",
       "async def to_predictions(species_class: int) -> IrisPrediction:\n",
       "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
       "\n",
       "    prediction = IrisPrediction(species=iris_species[species_class])\n",
       "    return prediction\n",
       "\n",
       "```\n"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# | echo: false\n",
    "\n",
    "kafka_app_source = \"\"\"\n",
    "from contextlib import asynccontextmanager\n",
    "\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "\n",
    "ml_models = {}\n",
    "\n",
    "\n",
    "@asynccontextmanager\n",
    "async def lifespan(app: FastKafka):\n",
    "    # Load the ML model\n",
    "    X, y = load_iris(return_X_y=True)\n",
    "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
    "        X, y\n",
    "    )\n",
    "    yield\n",
    "    # Clean up the ML models and release the resources\n",
    "    ml_models.clear()\n",
    "\n",
    "\n",
    "from pydantic import BaseModel, NonNegativeFloat, Field\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
    "    \n",
    "from fastkafka import FastKafka\n",
    "\n",
    "kafka_brokers = {\n",
    "    \"localhost\": {\n",
    "        \"url\": \"localhost\",\n",
    "        \"description\": \"local development kafka broker\",\n",
    "        \"port\": 9092,\n",
    "    },\n",
    "    \"production\": {\n",
    "        \"url\": \"kafka.airt.ai\",\n",
    "        \"description\": \"production kafka broker\",\n",
    "        \"port\": 9092,\n",
    "        \"protocol\": \"kafka-secure\",\n",
    "        \"security\": {\"type\": \"plain\"},\n",
    "    },\n",
    "}\n",
    "\n",
    "kafka_app = FastKafka(\n",
    "    title=\"Iris predictions\",\n",
    "    kafka_brokers=kafka_brokers,\n",
    "    lifespan=lifespan,\n",
    ")\n",
    "\n",
    "@kafka_app.consumes(topic=\"input_data\", decoder=\"json\")\n",
    "async def on_input_data(msg: IrisInputData):\n",
    "    species_class = ml_models[\"iris_predictor\"].predict(\n",
    "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
    "    )[0]\n",
    "\n",
    "    await to_predictions(species_class)\n",
    "\n",
    "\n",
    "@kafka_app.produces(topic=\"predictions\", encoder=\"json\")\n",
    "async def to_predictions(species_class: int) -> IrisPrediction:\n",
    "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
    "\n",
    "    prediction = IrisPrediction(species=iris_species[species_class])\n",
    "    return prediction\n",
    "\"\"\"\n",
    "\n",
    "Markdown(\n",
    "    f\"\"\"\n",
    "```python\n",
    "# content of the \"application.py\" file\n",
    "{kafka_app_source}\n",
    "```\n",
    "\"\"\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e7759193",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "23-07-05 08:19:23.742 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker._patch_consumers_and_producers(): Patching consumers and producers!\n",
      "23-07-05 08:19:23.742 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker starting\n",
      "23-07-05 08:19:23.754 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:23.754 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:23.763 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:23.764 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:23.764 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:23.765 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:23.765 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:23.765 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:23.765 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:23.766 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['input_data']\n",
      "23-07-05 08:19:23.766 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:23.769 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:23.769 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:23.769 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:23.769 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:23.770 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:23.770 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['predictions']\n",
      "23-07-05 08:19:23.770 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:27.765 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:27.765 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:27.766 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:27.766 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:27.766 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:27.767 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:27.767 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:27.767 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:27.767 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker stopping\n"
     ]
    }
   ],
   "source": [
    "# | hide\n",
    "\n",
    "with TemporaryDirectory() as d:\n",
    "    src_path = Path(d) / \"application.py\"\n",
    "    with open(src_path, \"w\") as source:\n",
    "        source.write(kafka_app_source)\n",
    "    with change_dir(d):\n",
    "        sys.path.insert(0, d)\n",
    "        from application import kafka_app, IrisInputData, IrisPrediction\n",
    "\n",
    "        from fastkafka.testing import Tester\n",
    "\n",
    "        msg = IrisInputData(\n",
    "            sepal_length=0.1,\n",
    "            sepal_width=0.2,\n",
    "            petal_length=0.3,\n",
    "            petal_width=0.4,\n",
    "        )\n",
    "\n",
    "        # Start Tester app and create InMemory Kafka broker for testing\n",
    "        async with Tester(kafka_app) as tester:\n",
    "            # Send IrisInputData message to input_data topic\n",
    "            await tester.to_input_data(msg)\n",
    "\n",
    "            # Assert that the kafka_app responded with IrisPrediction in predictions topic\n",
    "            await tester.awaited_mocks.on_predictions.assert_awaited_with(\n",
    "                IrisPrediction(species=\"setosa\"), timeout=3\n",
    "            )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "346d9c8b",
   "metadata": {},
   "source": [
     "In the above code, the `@kafka_app.consumes` decorator sets up a consumer for the \"input_data\" topic, using the 'json' decoder to convert the message payload into an instance of `IrisInputData`. The `@kafka_app.produces` decorator sets up a producer for the \"predictions\" topic, using the 'json' encoder to convert the instance of `IrisPrediction` into the message payload."
   ]
  },
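   {
    "cell_type": "markdown",
    "id": "a1b2c3d4",
    "metadata": {},
    "source": [
     "Conceptually, the json encoder/decoder performs a round trip like the following minimal, framework-free sketch (a plain dataclass stands in for the Pydantic model here; FastKafka performs these steps for you):\n",
     "\n",
     "```python\n",
     "import json\n",
     "from dataclasses import dataclass, asdict\n",
     "\n",
     "\n",
     "@dataclass\n",
     "class IrisInputData:\n",
     "    sepal_length: float\n",
     "    sepal_width: float\n",
     "    petal_length: float\n",
     "    petal_width: float\n",
     "\n",
     "\n",
     "# Encoding (producing): model instance -> dict -> JSON string -> bytes\n",
     "msg = IrisInputData(0.1, 0.2, 0.3, 0.4)\n",
     "raw = json.dumps(asdict(msg)).encode(\"utf-8\")\n",
     "\n",
     "# Decoding (consuming): bytes -> JSON string -> dict -> model instance\n",
     "decoded = IrisInputData(**json.loads(raw.decode(\"utf-8\")))\n",
     "assert decoded == msg\n",
     "```"
    ]
   },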
  {
   "cell_type": "markdown",
   "id": "75aca5c7",
   "metadata": {},
   "source": [
    "## 2. Avro encoder and decoder"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe4ef914",
   "metadata": {},
   "source": [
    "### What is Avro?\n",
    "\n",
     "Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format. To learn more about Apache Avro, please check out the [docs](https://avro.apache.org/docs/)."
   ]
  },
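   {
    "cell_type": "markdown",
    "id": "b2c3d4e5",
    "metadata": {},
    "source": [
     "To get a feel for that compactness, here is an illustrative, hand-rolled encoding of a single record like `{\"species\": \"setosa\"}`. Avro encodes a string as a zigzag-varint length prefix followed by the UTF-8 bytes and keeps the schema out of band, so the record takes 7 bytes versus 21 bytes of JSON. This is a sketch of the wire format only; in practice the avro encoder does all of this for you:\n",
     "\n",
     "```python\n",
     "import json\n",
     "\n",
     "\n",
     "def zigzag_varint(n: int) -> bytes:\n",
     "    # Zigzag-encode an integer, then emit it as a varint (7 bits per byte)\n",
     "    z = (n << 1) ^ (n >> 63)\n",
     "    out = bytearray()\n",
     "    while True:\n",
     "        byte = z & 0x7F\n",
     "        z >>= 7\n",
     "        if z:\n",
     "            out.append(byte | 0x80)\n",
     "        else:\n",
     "            out.append(byte)\n",
     "            return bytes(out)\n",
     "\n",
     "\n",
     "species = \"setosa\".encode(\"utf-8\")\n",
     "avro_record = zigzag_varint(len(species)) + species  # length prefix + raw bytes\n",
     "json_record = json.dumps({\"species\": \"setosa\"}).encode(\"utf-8\")\n",
     "\n",
     "print(len(avro_record), len(json_record))\n",
     "# 7 21\n",
     "```"
    ]
   },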
  {
   "cell_type": "markdown",
   "id": "07c8c300",
   "metadata": {},
   "source": [
    "### Installing FastKafka with Avro dependencies\n",
    "\n",
    "\n",
     "To use the avro encoder/decoder, `FastKafka` must be installed with its Apache Avro dependencies. Please install `FastKafka` with Avro support using the command `pip install fastkafka[avro]`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ada1754",
   "metadata": {},
   "source": [
    "### Defining Avro Schema Using Pydantic Models\n",
    "\n",
    "\n",
     "By default, you can use Pydantic models to define your message schemas. FastKafka internally takes care of encoding and decoding avro messages based on these Pydantic models.\n",
     "\n",
     "So, just as in the [tutorial](/docs#tutorial), the message schemas remain as they are:\n",
    "\n",
    "```python\n",
    "# Define Pydantic models for Avro messages\n",
    "from pydantic import BaseModel, NonNegativeFloat, Field\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
    "```\n",
    "\n",
    "No need to change anything to support avro. You can use existing Pydantic models as is."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1fbd3c20",
   "metadata": {},
   "source": [
    "### Reusing existing avro schema\n",
    "\n",
    "\n",
    "If you are using some other library to send and receive avro encoded messages, it is highly likely that you already have an Avro schema defined."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9e42789",
   "metadata": {},
   "source": [
    "#### Building pydantic models from avro schema dictionary\n",
    "\n",
    "\n",
     "Let's modify the above example and assume we already have schemas defined for `IrisInputData` and `IrisPrediction`, which look like this:\n",
    "\n",
    "```python\n",
    "iris_input_data_schema = {\n",
    "    \"type\": \"record\",\n",
    "    \"namespace\": \"IrisInputData\",\n",
    "    \"name\": \"IrisInputData\",\n",
    "    \"fields\": [\n",
    "        {\"doc\": \"Sepal length in cm\", \"type\": \"double\", \"name\": \"sepal_length\"},\n",
    "        {\"doc\": \"Sepal width in cm\", \"type\": \"double\", \"name\": \"sepal_width\"},\n",
    "        {\"doc\": \"Petal length in cm\", \"type\": \"double\", \"name\": \"petal_length\"},\n",
    "        {\"doc\": \"Petal width in cm\", \"type\": \"double\", \"name\": \"petal_width\"},\n",
    "    ],\n",
    "}\n",
    "iris_prediction_schema = {\n",
    "    \"type\": \"record\",\n",
    "    \"namespace\": \"IrisPrediction\",\n",
    "    \"name\": \"IrisPrediction\",\n",
    "    \"fields\": [{\"doc\": \"Predicted species\", \"type\": \"string\", \"name\": \"species\"}],\n",
    "}\n",
    "```\n",
    "\n",
     "We can easily construct Pydantic models from an Avro schema using the `avsc_to_pydantic` function, which is included in `FastKafka` itself.\n",
    "\n",
    "```python\n",
    "from fastkafka.encoder import avsc_to_pydantic\n",
    "\n",
    "IrisInputData = avsc_to_pydantic(iris_input_data_schema)\n",
    "print(IrisInputData.model_fields)\n",
    "\n",
    "IrisPrediction = avsc_to_pydantic(iris_prediction_schema)\n",
    "print(IrisPrediction.model_fields)\n",
    "```\n",
    "\n",
     "The above code converts the Avro schemas to Pydantic models and prints the models' fields. The output of the above is:\n",
    "\n",
    "```txt\n",
    "{'sepal_length': ModelField(name='sepal_length', type=float, required=True),\n",
    " 'sepal_width': ModelField(name='sepal_width', type=float, required=True),\n",
    " 'petal_length': ModelField(name='petal_length', type=float, required=True),\n",
    " 'petal_width': ModelField(name='petal_width', type=float, required=True)}\n",
    " \n",
    " {'species': ModelField(name='species', type=str, required=True)}\n",
    "```\n",
    "\n",
     "This is exactly the same as defining the Pydantic models manually ourselves, but without the risk of making mistakes while converting the Avro schema by hand. You can accomplish it easily and automatically by using the `avsc_to_pydantic` function as demonstrated above."
   ]
  },
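   {
    "cell_type": "markdown",
    "id": "c3d4e5f6",
    "metadata": {},
    "source": [
     "Under the hood, `avsc_to_pydantic` maps Avro field types to Python types. The following is a simplified, illustrative sketch of that type mapping only (the real function also handles field descriptions, defaults, and more complex schemas):\n",
     "\n",
     "```python\n",
     "# Illustrative only: a minimal mapping of Avro primitive types to Python types\n",
     "AVRO_TO_PYTHON = {\n",
     "    \"double\": float,\n",
     "    \"float\": float,\n",
     "    \"int\": int,\n",
     "    \"long\": int,\n",
     "    \"string\": str,\n",
     "    \"boolean\": bool,\n",
     "    \"bytes\": bytes,\n",
     "}\n",
     "\n",
     "\n",
     "def avsc_field_types(schema: dict) -> dict:\n",
     "    # Return {field_name: python_type} for a flat Avro record schema\n",
     "    return {f[\"name\"]: AVRO_TO_PYTHON[f[\"type\"]] for f in schema[\"fields\"]}\n",
     "\n",
     "\n",
     "iris_prediction_schema = {\n",
     "    \"type\": \"record\",\n",
     "    \"namespace\": \"IrisPrediction\",\n",
     "    \"name\": \"IrisPrediction\",\n",
     "    \"fields\": [{\"doc\": \"Predicted species\", \"type\": \"string\", \"name\": \"species\"}],\n",
     "}\n",
     "\n",
     "print(avsc_field_types(iris_prediction_schema))\n",
     "# {'species': <class 'str'>}\n",
     "```"
    ]
   },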
  {
   "cell_type": "markdown",
   "id": "233f4a6b",
   "metadata": {},
   "source": [
    "#### Building pydantic models from `.avsc` file\n",
    "\n",
     "Not all cases will have the Avro schema conveniently defined as a Python dictionary. You may have it stored in `.avsc` files in the filesystem. Let's see how to convert those `.avsc` files to Pydantic models.\n",
    "\n",
     "Let's assume our Avro schemas are stored in files called `iris_input_data_schema.avsc` and `iris_prediction_schema.avsc`. In that case, the following code converts the schemas to Pydantic models:\n",
    "\n",
    "```python\n",
    "import json\n",
    "from fastkafka.encoder import avsc_to_pydantic\n",
    "\n",
    "\n",
    "with open(\"iris_input_data_schema.avsc\", \"rb\") as f:\n",
    "    iris_input_data_schema = json.load(f)\n",
    "    \n",
    "with open(\"iris_prediction_schema.avsc\", \"rb\") as f:\n",
    "    iris_prediction_schema = json.load(f)\n",
    "    \n",
    "\n",
    "IrisInputData = avsc_to_pydantic(iris_input_data_schema)\n",
    "print(IrisInputData.model_fields)\n",
    "\n",
    "IrisPrediction = avsc_to_pydantic(iris_prediction_schema)\n",
    "print(IrisPrediction.model_fields)\n",
    "```"
   ]
  },
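   {
    "cell_type": "markdown",
    "id": "d4e5f607",
    "metadata": {},
    "source": [
     "An `.avsc` file is simply the Avro schema serialized as JSON, so saving and loading schemas is an ordinary JSON round trip. A minimal, self-contained sketch (using a temporary directory instead of real schema files):\n",
     "\n",
     "```python\n",
     "import json\n",
     "import tempfile\n",
     "from pathlib import Path\n",
     "\n",
     "iris_prediction_schema = {\n",
     "    \"type\": \"record\",\n",
     "    \"namespace\": \"IrisPrediction\",\n",
     "    \"name\": \"IrisPrediction\",\n",
     "    \"fields\": [{\"doc\": \"Predicted species\", \"type\": \"string\", \"name\": \"species\"}],\n",
     "}\n",
     "\n",
     "with tempfile.TemporaryDirectory() as d:\n",
     "    # Write the schema dictionary out as an .avsc (plain JSON) file...\n",
     "    schema_path = Path(d) / \"iris_prediction_schema.avsc\"\n",
     "    schema_path.write_text(json.dumps(iris_prediction_schema, indent=4))\n",
     "\n",
     "    # ...and load it back, recovering the identical dictionary\n",
     "    loaded_schema = json.loads(schema_path.read_text())\n",
     "\n",
     "assert loaded_schema == iris_prediction_schema\n",
     "```"
    ]
   },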
  {
   "cell_type": "markdown",
   "id": "191ab3f8",
   "metadata": {},
   "source": [
    "### Consume/Produce avro messages with FastKafka\n",
    "\n",
    "\n",
     "`FastKafka` provides the `@consumes` and `@produces` methods to consume messages from and produce messages to a `Kafka` topic, as explained in the [tutorial](/docs#function-decorators).\n",
     "\n",
     "The `@consumes` and `@produces` methods accept a `decoder`/`encoder` parameter to decode/encode avro messages:\n",
    "\n",
    "```python\n",
     "@kafka_app.consumes(topic=\"input_data\", decoder=\"avro\")\n",
    "async def on_input_data(msg: IrisInputData):\n",
    "    species_class = ml_models[\"iris_predictor\"].predict(\n",
    "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
    "    )[0]\n",
    "\n",
    "    await to_predictions(species_class)\n",
    "\n",
    "\n",
     "@kafka_app.produces(topic=\"predictions\", encoder=\"avro\")\n",
    "async def to_predictions(species_class: int) -> IrisPrediction:\n",
    "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
    "\n",
    "    prediction = IrisPrediction(species=iris_species[species_class])\n",
    "    return prediction\n",
    "```\n",
    "\n",
     "In the above example, in the `@consumes` and `@produces` methods, we explicitly instruct FastKafka to decode and encode messages using the avro decoder/encoder instead of the default json decoder/encoder."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44fed866",
   "metadata": {},
   "source": [
    "### Assembling it all together\n",
    "\n",
     "Let's rewrite the sample code from the [tutorial](/docs#running-the-service) to use avro to decode and encode messages:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4923af43",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "\n",
       "```python\n",
       "# content of the \"application.py\" file\n",
       "\n",
       "from contextlib import asynccontextmanager\n",
       "\n",
       "from sklearn.datasets import load_iris\n",
       "from sklearn.linear_model import LogisticRegression\n",
       "\n",
       "from fastkafka import FastKafka\n",
       "\n",
       "ml_models = {}\n",
       "\n",
       "\n",
       "@asynccontextmanager\n",
       "async def lifespan(app: FastKafka):\n",
       "    # Load the ML model\n",
       "    X, y = load_iris(return_X_y=True)\n",
       "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
       "        X, y\n",
       "    )\n",
       "    yield\n",
       "    # Clean up the ML models and release the resources\n",
       "    ml_models.clear()\n",
       "\n",
       "\n",
       "iris_input_data_schema = {\n",
       "    \"type\": \"record\",\n",
       "    \"namespace\": \"IrisInputData\",\n",
       "    \"name\": \"IrisInputData\",\n",
       "    \"fields\": [\n",
       "        {\"doc\": \"Sepal length in cm\", \"type\": \"double\", \"name\": \"sepal_length\"},\n",
       "        {\"doc\": \"Sepal width in cm\", \"type\": \"double\", \"name\": \"sepal_width\"},\n",
       "        {\"doc\": \"Petal length in cm\", \"type\": \"double\", \"name\": \"petal_length\"},\n",
       "        {\"doc\": \"Petal width in cm\", \"type\": \"double\", \"name\": \"petal_width\"},\n",
       "    ],\n",
       "}\n",
       "iris_prediction_schema = {\n",
       "    \"type\": \"record\",\n",
       "    \"namespace\": \"IrisPrediction\",\n",
       "    \"name\": \"IrisPrediction\",\n",
       "    \"fields\": [{\"doc\": \"Predicted species\", \"type\": \"string\", \"name\": \"species\"}],\n",
       "}\n",
       "# Or load schema from avsc files\n",
       "\n",
       "from fastkafka.encoder import avsc_to_pydantic\n",
       "\n",
       "IrisInputData = avsc_to_pydantic(iris_input_data_schema)\n",
       "IrisPrediction = avsc_to_pydantic(iris_prediction_schema)\n",
       "\n",
       "    \n",
       "from fastkafka import FastKafka\n",
       "\n",
       "kafka_brokers = {\n",
       "    \"localhost\": {\n",
       "        \"url\": \"localhost\",\n",
       "        \"description\": \"local development kafka broker\",\n",
       "        \"port\": 9092,\n",
       "    },\n",
       "    \"production\": {\n",
       "        \"url\": \"kafka.airt.ai\",\n",
       "        \"description\": \"production kafka broker\",\n",
       "        \"port\": 9092,\n",
       "        \"protocol\": \"kafka-secure\",\n",
       "        \"security\": {\"type\": \"plain\"},\n",
       "    },\n",
       "}\n",
       "\n",
       "kafka_app = FastKafka(\n",
       "    title=\"Iris predictions\",\n",
       "    kafka_brokers=kafka_brokers,\n",
       "    lifespan=lifespan,\n",
       ")\n",
       "\n",
       "@kafka_app.consumes(topic=\"input_data\", decoder=\"avro\")\n",
       "async def on_input_data(msg: IrisInputData):\n",
       "    species_class = ml_models[\"iris_predictor\"].predict(\n",
       "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
       "    )[0]\n",
       "\n",
       "    await to_predictions(species_class)\n",
       "\n",
       "\n",
       "@kafka_app.produces(topic=\"predictions\", encoder=\"avro\")\n",
       "async def to_predictions(species_class: int) -> IrisPrediction:\n",
       "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
       "\n",
       "    prediction = IrisPrediction(species=iris_species[species_class])\n",
       "    return prediction\n",
       "\n",
       "```\n"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# | echo: false\n",
    "\n",
    "kafka_app_source = \"\"\"\n",
    "from contextlib import asynccontextmanager\n",
    "\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "\n",
    "ml_models = {}\n",
    "\n",
    "\n",
    "@asynccontextmanager\n",
    "async def lifespan(app: FastKafka):\n",
    "    # Load the ML model\n",
    "    X, y = load_iris(return_X_y=True)\n",
    "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
    "        X, y\n",
    "    )\n",
    "    yield\n",
    "    # Clean up the ML models and release the resources\n",
    "    ml_models.clear()\n",
    "\n",
    "\n",
    "iris_input_data_schema = {\n",
    "    \"type\": \"record\",\n",
    "    \"namespace\": \"IrisInputData\",\n",
    "    \"name\": \"IrisInputData\",\n",
    "    \"fields\": [\n",
    "        {\"doc\": \"Sepal length in cm\", \"type\": \"double\", \"name\": \"sepal_length\"},\n",
    "        {\"doc\": \"Sepal width in cm\", \"type\": \"double\", \"name\": \"sepal_width\"},\n",
    "        {\"doc\": \"Petal length in cm\", \"type\": \"double\", \"name\": \"petal_length\"},\n",
    "        {\"doc\": \"Petal width in cm\", \"type\": \"double\", \"name\": \"petal_width\"},\n",
    "    ],\n",
    "}\n",
    "iris_prediction_schema = {\n",
    "    \"type\": \"record\",\n",
    "    \"namespace\": \"IrisPrediction\",\n",
    "    \"name\": \"IrisPrediction\",\n",
    "    \"fields\": [{\"doc\": \"Predicted species\", \"type\": \"string\", \"name\": \"species\"}],\n",
    "}\n",
    "# Or load schema from avsc files\n",
    "\n",
    "from fastkafka.encoder import avsc_to_pydantic\n",
    "\n",
    "IrisInputData = avsc_to_pydantic(iris_input_data_schema)\n",
    "IrisPrediction = avsc_to_pydantic(iris_prediction_schema)\n",
    "\n",
    "    \n",
    "from fastkafka import FastKafka\n",
    "\n",
    "kafka_brokers = {\n",
    "    \"localhost\": {\n",
    "        \"url\": \"localhost\",\n",
    "        \"description\": \"local development kafka broker\",\n",
    "        \"port\": 9092,\n",
    "    },\n",
    "    \"production\": {\n",
    "        \"url\": \"kafka.airt.ai\",\n",
    "        \"description\": \"production kafka broker\",\n",
    "        \"port\": 9092,\n",
    "        \"protocol\": \"kafka-secure\",\n",
    "        \"security\": {\"type\": \"plain\"},\n",
    "    },\n",
    "}\n",
    "\n",
    "kafka_app = FastKafka(\n",
    "    title=\"Iris predictions\",\n",
    "    kafka_brokers=kafka_brokers,\n",
    "    lifespan=lifespan,\n",
    ")\n",
    "\n",
    "@kafka_app.consumes(topic=\"input_data\", decoder=\"avro\")\n",
    "async def on_input_data(msg: IrisInputData):\n",
    "    species_class = ml_models[\"iris_predictor\"].predict(\n",
    "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
    "    )[0]\n",
    "\n",
    "    await to_predictions(species_class)\n",
    "\n",
    "\n",
    "@kafka_app.produces(topic=\"predictions\", encoder=\"avro\")\n",
    "async def to_predictions(species_class: int) -> IrisPrediction:\n",
    "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
    "\n",
    "    prediction = IrisPrediction(species=iris_species[species_class])\n",
    "    return prediction\n",
    "\"\"\"\n",
    "\n",
    "\n",
    "Markdown(\n",
    "    f\"\"\"\n",
    "```python\n",
    "# content of the \"application.py\" file\n",
    "{kafka_app_source}\n",
    "```\n",
    "\"\"\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bd7e2c0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "23-07-05 08:19:27.787 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker._patch_consumers_and_producers(): Patching consumers and producers!\n",
      "23-07-05 08:19:27.788 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker starting\n",
      "23-07-05 08:19:27.799 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:27.800 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:27.809 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:27.810 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:27.810 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:27.811 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:27.811 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:27.811 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:27.812 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:27.812 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['input_data']\n",
      "23-07-05 08:19:27.812 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:27.812 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:27.813 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:27.813 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:27.813 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:27.814 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:27.814 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['predictions']\n",
      "23-07-05 08:19:27.814 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:31.811 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:31.812 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:31.812 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:31.812 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:31.813 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:31.813 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:31.814 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:31.814 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:31.814 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker stopping\n"
     ]
    }
   ],
   "source": [
    "# | hide\n",
    "\n",
    "with TemporaryDirectory() as d:\n",
    "    src_path = Path(d) / \"application.py\"\n",
    "    with open(src_path, \"w\") as source:\n",
    "        source.write(kafka_app_source)\n",
    "    with change_dir(d):\n",
    "        sys.path.insert(0, d)\n",
    "        from application import kafka_app, IrisInputData, IrisPrediction\n",
    "\n",
    "        from fastkafka.testing import Tester\n",
    "\n",
    "        msg = IrisInputData(\n",
    "            sepal_length=0.1,\n",
    "            sepal_width=0.2,\n",
    "            petal_length=0.3,\n",
    "            petal_width=0.4,\n",
    "        )\n",
    "\n",
    "        # Start Tester app and create InMemory Kafka broker for testing\n",
    "        async with Tester(kafka_app) as tester:\n",
    "            # Send IrisInputData message to input_data topic\n",
    "            await tester.to_input_data(msg)\n",
    "\n",
    "            # Assert that the kafka_app responded with IrisPrediction in predictions topic\n",
    "            await tester.awaited_mocks.on_predictions.assert_awaited_with(\n",
    "                IrisPrediction(species=\"setosa\"), timeout=3\n",
    "            )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11596544",
   "metadata": {},
   "source": [
    "The above code is a sample implementation of using FastKafka to consume and produce Avro-encoded messages from/to a Kafka topic. The code defines two Avro schemas for the input data and the prediction result. It then uses the `avsc_to_pydantic` function from the FastKafka library to convert the Avro schema into Pydantic models, which will be used to decode and encode Avro messages.\n",
    "\n",
    "The `FastKafka` class is then instantiated with the broker details, and two functions decorated with `@kafka_app.consumes` and `@kafka_app.produces` are defined to consume messages from the \"input_data\" topic and produce messages to the \"predictions\" topic, respectively. The functions uses the decoder=\"avro\" and encoder=\"avro\" parameters to decode and encode the Avro messages.\n",
    "\n",
    "In summary, the above code demonstrates a straightforward way to use Avro-encoded messages with FastKafka to build a message processing pipeline."
   ]
  },
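  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "To see why Avro is attractive on the wire, note that its binary encoding writes a record's fields in schema order with no field names or delimiters. The following stdlib-only sketch (illustrative only, not FastKafka's actual Avro implementation) packs the four `double` fields of `IrisInputData` the way the Avro specification does, as little-endian IEEE 754:\n",
    "\n",
    "```python\n",
    "import struct\n",
    "\n",
    "# Avro writes each \"double\" as 8 bytes, little-endian IEEE 754,\n",
    "# in the field order declared by the schema.\n",
    "record = {\"sepal_length\": 0.5, \"sepal_width\": 0.5, \"petal_length\": 0.5, \"petal_width\": 0.5}\n",
    "field_order = [\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"]\n",
    "\n",
    "raw = b\"\".join(struct.pack(\"<d\", record[name]) for name in field_order)\n",
    "assert len(raw) == 32  # 4 doubles x 8 bytes; the equivalent JSON string is ~82 bytes\n",
    "\n",
    "decoded = dict(zip(field_order, struct.unpack(\"<4d\", raw)))\n",
    "assert decoded == record\n",
    "```\n",
    "\n",
    "The schema itself travels out of band (here, in the application source), which is what makes the payload this compact."
   ]
  },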
  {
   "cell_type": "markdown",
   "id": "1c45a7f2",
   "metadata": {},
   "source": [
    "## 3. Custom encoder and decoder\n",
    "\n",
    "If you are not happy with the json or avro encoder/decoder options, you can write your own encoder/decoder functions and use them to encode/decode Pydantic messages."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "afde8fe2",
   "metadata": {},
   "source": [
    "### Writing a custom encoder and decoder\n",
    "\n",
    "In this section, let's see how to write a custom encoder and decoder which obfuscates kafka message with simple [ROT13](https://en.wikipedia.org/wiki/ROT13) cipher.\n",
    "\n",
    "```python\n",
    "import codecs\n",
    "import json\n",
    "from typing import Any, Type\n",
    "\n",
    "\n",
    "def custom_encoder(msg: BaseModel) -> bytes:\n",
    "    msg_str = msg.json()\n",
    "    obfuscated = codecs.encode(msg_str, 'rot13')\n",
    "    raw_bytes = obfuscated.encode(\"utf-8\")\n",
    "    return raw_bytes\n",
    "\n",
    "def custom_decoder(raw_msg: bytes, cls: Type[BaseModel]) -> Any:\n",
    "    obfuscated = raw_msg.decode(\"utf-8\")\n",
    "    msg_str = codecs.decode(obfuscated, 'rot13')\n",
    "    msg_dict = json.loads(msg_str)\n",
    "    return cls(**msg_dict)\n",
    "```\n",
    "\n",
    "The above code defines two custom functions for encoding and decoding messages in a Kafka application using the FastKafka library. \n",
    "\n",
    "The encoding function, `custom_encoder()`, takes a message `msg` which is an instance of a Pydantic model, converts it to a JSON string using the `json()` method, obfuscates the resulting string using the ROT13 algorithm from the `codecs` module, and finally encodes the obfuscated string as raw bytes using the UTF-8 encoding. \n",
    "\n",
    "The decoding function, `custom_decoder()`, takes a raw message `raw_msg` in bytes format, a Pydantic class to construct instance with cls parameter. It first decodes the raw message from UTF-8 encoding, then uses the ROT13 algorithm to de-obfuscate the string. Finally, it loads the resulting JSON string using the `json.loads()` method and returns a new instance of the specified `cls` class initialized with the decoded dictionary. \n",
    "\n",
    "These functions can be used with FastKafka's `encoder` and `decoder` parameters to customize the serialization and deserialization of messages in Kafka topics.\n",
    "\n",
    "\n",
    "Let's test the above code\n",
    "\n",
    "```python\n",
    "i = IrisInputData(sepal_length=0.5, sepal_width=0.5, petal_length=0.5, petal_width=0.5)\n",
    "\n",
    "encoded = custom_encoder(i)\n",
    "display(encoded)\n",
    "\n",
    "decoded = custom_decoder(encoded, IrisInputData)\n",
    "display(decoded)\n",
    "```\n",
    "\n",
    "This will result in following output\n",
    "\n",
    "```txt\n",
    "b'{\"frcny_yratgu\": 0.5, \"frcny_jvqgu\": 0.5, \"crgny_yratgu\": 0.5, \"crgny_jvqgu\": 0.5}'\n",
    "\n",
    "IrisInputData(sepal_length=0.5, sepal_width=0.5, petal_length=0.5, petal_width=0.5)\n",
    "```"
   ]
  },
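  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "Note that in Python 3 `rot13` is a text-to-text codec, so it has to go through `codecs.encode`/`codecs.decode` rather than `str.encode`. A quick stdlib-only check of its behaviour:\n",
    "\n",
    "```python\n",
    "import codecs\n",
    "\n",
    "# ROT13 shifts each ASCII letter 13 places, so applying it twice is the identity.\n",
    "obfuscated = codecs.encode(\"sepal_length\", \"rot13\")\n",
    "assert obfuscated == \"frcny_yratgu\"\n",
    "assert codecs.decode(obfuscated, \"rot13\") == \"sepal_length\"\n",
    "\n",
    "# Digits and punctuation pass through unchanged, which is why the JSON\n",
    "# structure (braces, quotes, numbers) stays intact after obfuscation.\n",
    "assert codecs.encode('{\"x\": 0.5}', \"rot13\") == '{\"k\": 0.5}'\n",
    "```\n",
    "\n",
    "ROT13 is of course not encryption; it only serves here as a minimal, reversible transformation to demonstrate the custom codec hooks."
   ]
  },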
  {
   "cell_type": "markdown",
   "id": "20fe6ff6",
   "metadata": {},
   "source": [
    "### Assembling it all together\n",
    "\n",
    "Let's rewrite the sample code found in [tutorial](/docs#running-the-service) to use our custom decoder and encoder functions:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "215d6e0e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "\n",
       "```python\n",
       "# content of the \"application.py\" file\n",
       "\n",
       "from contextlib import asynccontextmanager\n",
       "\n",
       "from sklearn.datasets import load_iris\n",
       "from sklearn.linear_model import LogisticRegression\n",
       "\n",
       "from fastkafka import FastKafka\n",
       "\n",
       "ml_models = {}\n",
       "\n",
       "\n",
       "@asynccontextmanager\n",
       "async def lifespan(app: FastKafka):\n",
       "    # Load the ML model\n",
       "    X, y = load_iris(return_X_y=True)\n",
       "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
       "        X, y\n",
       "    )\n",
       "    yield\n",
       "    # Clean up the ML models and release the resources\n",
       "    ml_models.clear()\n",
       "\n",
       "\n",
       "from pydantic import BaseModel, NonNegativeFloat, Field\n",
       "\n",
       "class IrisInputData(BaseModel):\n",
       "    sepal_length: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Sepal length in cm\"\n",
       "    )\n",
       "    sepal_width: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Sepal width in cm\"\n",
       "    )\n",
       "    petal_length: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Petal length in cm\"\n",
       "    )\n",
       "    petal_width: NonNegativeFloat = Field(\n",
       "        ..., example=0.5, description=\"Petal width in cm\"\n",
       "    )\n",
       "\n",
       "\n",
       "class IrisPrediction(BaseModel):\n",
       "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
       "\n",
       "\n",
       "import codecs\n",
       "import json\n",
       "from typing import Any, Type\n",
       "\n",
       "\n",
       "def custom_encoder(msg: BaseModel) -> bytes:\n",
       "    msg_str = msg.json()\n",
       "    obfuscated = codecs.encode(msg_str, 'rot13')\n",
       "    raw_bytes = obfuscated.encode(\"utf-8\")\n",
       "    return raw_bytes\n",
       "\n",
       "def custom_decoder(raw_msg: bytes, cls: Type[BaseModel]) -> Any:\n",
       "    obfuscated = raw_msg.decode(\"utf-8\")\n",
       "    msg_str = codecs.decode(obfuscated, 'rot13')\n",
       "    msg_dict = json.loads(msg_str)\n",
       "    return cls(**msg_dict)\n",
       "\n",
       "    \n",
       "from fastkafka import FastKafka\n",
       "\n",
       "kafka_brokers = {\n",
       "    \"localhost\": {\n",
       "        \"url\": \"localhost\",\n",
       "        \"description\": \"local development kafka broker\",\n",
       "        \"port\": 9092,\n",
       "    },\n",
       "    \"production\": {\n",
       "        \"url\": \"kafka.airt.ai\",\n",
       "        \"description\": \"production kafka broker\",\n",
       "        \"port\": 9092,\n",
       "        \"protocol\": \"kafka-secure\",\n",
       "        \"security\": {\"type\": \"plain\"},\n",
       "    },\n",
       "}\n",
       "\n",
       "kafka_app = FastKafka(\n",
       "    title=\"Iris predictions\",\n",
       "    kafka_brokers=kafka_brokers,\n",
       "    lifespan=lifespan,\n",
       ")\n",
       "\n",
       "@kafka_app.consumes(topic=\"input_data\", decoder=custom_decoder)\n",
       "async def on_input_data(msg: IrisInputData):\n",
       "    species_class = ml_models[\"iris_predictor\"].predict(\n",
       "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
       "    )[0]\n",
       "\n",
       "    await to_predictions(species_class)\n",
       "\n",
       "\n",
       "@kafka_app.produces(topic=\"predictions\", encoder=custom_encoder)\n",
       "async def to_predictions(species_class: int) -> IrisPrediction:\n",
       "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
       "\n",
       "    prediction = IrisPrediction(species=iris_species[species_class])\n",
       "    return prediction\n",
       "\n",
       "```\n"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# | echo: false\n",
    "\n",
    "kafka_app_source = \"\"\"\n",
    "from contextlib import asynccontextmanager\n",
    "\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "from fastkafka import FastKafka\n",
    "\n",
    "ml_models = {}\n",
    "\n",
    "\n",
    "@asynccontextmanager\n",
    "async def lifespan(app: FastKafka):\n",
    "    # Load the ML model\n",
    "    X, y = load_iris(return_X_y=True)\n",
    "    ml_models[\"iris_predictor\"] = LogisticRegression(random_state=0, max_iter=500).fit(\n",
    "        X, y\n",
    "    )\n",
    "    yield\n",
    "    # Clean up the ML models and release the resources\n",
    "    ml_models.clear()\n",
    "\n",
    "\n",
    "from pydantic import BaseModel, NonNegativeFloat, Field\n",
    "\n",
    "class IrisInputData(BaseModel):\n",
    "    sepal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal length in cm\"\n",
    "    )\n",
    "    sepal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Sepal width in cm\"\n",
    "    )\n",
    "    petal_length: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal length in cm\"\n",
    "    )\n",
    "    petal_width: NonNegativeFloat = Field(\n",
    "        ..., example=0.5, description=\"Petal width in cm\"\n",
    "    )\n",
    "\n",
    "\n",
    "class IrisPrediction(BaseModel):\n",
    "    species: str = Field(..., example=\"setosa\", description=\"Predicted species\")\n",
    "\n",
    "\n",
    "import codecs\n",
    "import json\n",
    "from typing import Any, Type\n",
    "\n",
    "\n",
    "def custom_encoder(msg: BaseModel) -> bytes:\n",
    "    msg_str = msg.json()\n",
    "    obfuscated = codecs.encode(msg_str, 'rot13')\n",
    "    raw_bytes = obfuscated.encode(\"utf-8\")\n",
    "    return raw_bytes\n",
    "\n",
    "def custom_decoder(raw_msg: bytes, cls: Type[BaseModel]) -> Any:\n",
    "    obfuscated = raw_msg.decode(\"utf-8\")\n",
    "    msg_str = codecs.decode(obfuscated, 'rot13')\n",
    "    msg_dict = json.loads(msg_str)\n",
    "    return cls(**msg_dict)\n",
    "\n",
    "    \n",
    "from fastkafka import FastKafka\n",
    "\n",
    "kafka_brokers = {\n",
    "    \"localhost\": {\n",
    "        \"url\": \"localhost\",\n",
    "        \"description\": \"local development kafka broker\",\n",
    "        \"port\": 9092,\n",
    "    },\n",
    "    \"production\": {\n",
    "        \"url\": \"kafka.airt.ai\",\n",
    "        \"description\": \"production kafka broker\",\n",
    "        \"port\": 9092,\n",
    "        \"protocol\": \"kafka-secure\",\n",
    "        \"security\": {\"type\": \"plain\"},\n",
    "    },\n",
    "}\n",
    "\n",
    "kafka_app = FastKafka(\n",
    "    title=\"Iris predictions\",\n",
    "    kafka_brokers=kafka_brokers,\n",
    "    lifespan=lifespan,\n",
    ")\n",
    "\n",
    "@kafka_app.consumes(topic=\"input_data\", decoder=custom_decoder)\n",
    "async def on_input_data(msg: IrisInputData):\n",
    "    species_class = ml_models[\"iris_predictor\"].predict(\n",
    "        [[msg.sepal_length, msg.sepal_width, msg.petal_length, msg.petal_width]]\n",
    "    )[0]\n",
    "\n",
    "    await to_predictions(species_class)\n",
    "\n",
    "\n",
    "@kafka_app.produces(topic=\"predictions\", encoder=custom_encoder)\n",
    "async def to_predictions(species_class: int) -> IrisPrediction:\n",
    "    iris_species = [\"setosa\", \"versicolor\", \"virginica\"]\n",
    "\n",
    "    prediction = IrisPrediction(species=iris_species[species_class])\n",
    "    return prediction\n",
    "\"\"\"\n",
    "\n",
    "\n",
    "\n",
    "Markdown(\n",
    "    f\"\"\"\n",
    "```python\n",
    "# content of the \"application.py\" file\n",
    "{kafka_app_source}\n",
    "```\n",
    "\"\"\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "94623894",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "23-07-05 08:19:31.838 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker._patch_consumers_and_producers(): Patching consumers and producers!\n",
      "23-07-05 08:19:31.838 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker starting\n",
      "23-07-05 08:19:31.853 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:31.853 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:31.867 [INFO] fastkafka._application.app: _create_producer() : created producer using the config: '{'bootstrap_servers': 'localhost:9092'}'\n",
      "23-07-05 08:19:31.867 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched start() called()\n",
      "23-07-05 08:19:31.868 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:31.868 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:31.869 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:31.869 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:31.869 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:31.869 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['input_data']\n",
      "23-07-05 08:19:31.870 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:31.870 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() starting...\n",
      "23-07-05 08:19:31.870 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer created using the following parameters: {'auto_offset_reset': 'earliest', 'max_poll_records': 100, 'bootstrap_servers': 'localhost:9092'}\n",
      "23-07-05 08:19:31.870 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched start() called()\n",
      "23-07-05 08:19:31.871 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer started.\n",
      "23-07-05 08:19:31.871 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched subscribe() called\n",
      "23-07-05 08:19:31.871 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer.subscribe(), subscribing to: ['predictions']\n",
      "23-07-05 08:19:31.871 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer subscribed.\n",
      "23-07-05 08:19:35.868 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:35.869 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:35.869 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:35.870 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:35.870 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaConsumer patched stop() called\n",
      "23-07-05 08:19:35.870 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop(): Consumer stopped.\n",
      "23-07-05 08:19:35.870 [INFO] fastkafka._components.aiokafka_consumer_loop: aiokafka_consumer_loop() finished.\n",
      "23-07-05 08:19:35.871 [INFO] fastkafka._testing.in_memory_broker: AIOKafkaProducer patched stop() called\n",
      "23-07-05 08:19:35.871 [INFO] fastkafka._testing.in_memory_broker: InMemoryBroker stopping\n"
     ]
    }
   ],
   "source": [
    "# | hide\n",
    "\n",
    "with TemporaryDirectory() as d:\n",
    "    src_path = Path(d) / \"application.py\"\n",
    "    with open(src_path, \"w\") as source:\n",
    "        source.write(kafka_app_source)\n",
    "    with change_dir(d):\n",
    "        sys.path.insert(0, d)\n",
    "        from application import kafka_app, IrisInputData, IrisPrediction\n",
    "\n",
    "        from fastkafka.testing import Tester\n",
    "\n",
    "        msg = IrisInputData(\n",
    "            sepal_length=0.1,\n",
    "            sepal_width=0.2,\n",
    "            petal_length=0.3,\n",
    "            petal_width=0.4,\n",
    "        )\n",
    "\n",
    "        # Start Tester app and create InMemory Kafka broker for testing\n",
    "        async with Tester(kafka_app) as tester:\n",
    "            # Send IrisInputData message to input_data topic\n",
    "            await tester.to_input_data(msg)\n",
    "\n",
    "            # Assert that the kafka_app responded with IrisPrediction in predictions topic\n",
    "            await tester.awaited_mocks.on_predictions.assert_awaited_with(\n",
    "                IrisPrediction(species=\"setosa\"), timeout=3\n",
    "            )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07d7dfde",
   "metadata": {},
   "source": [
    "This code defines a custom encoder and decoder functions for encoding and decoding messages sent through a Kafka messaging system. \n",
    "\n",
    "The custom `encoder` function takes a message represented as a `BaseModel` and encodes it as bytes by first converting it to a JSON string and then obfuscating it using the ROT13 encoding. The obfuscated message is then converted to bytes using UTF-8 encoding and returned.\n",
    "\n",
    "The custom `decoder` function takes in the bytes representing an obfuscated message, decodes it using UTF-8 encoding, then decodes the ROT13 obfuscation, and finally loads it as a dictionary using the `json` module. This dictionary is then converted to a `BaseModel` instance using the cls parameter."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
