{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-colab"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/pathwaycom/pathway/blob/main/examples/notebooks/tutorials/declarative_vs_imperative.ipynb\" target=\"_parent\"><img src=\"https://pathway.com/assets/colab-badge.svg\" alt=\"Run In Colab\" class=\"inline\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "notebook-instructions",
      "source": [
        "# Installing Pathway with Python 3.10+\n",
        "\n",
        "In the cell below, we install Pathway into a Python 3.10+ Linux runtime.\n",
        "\n",
        "> **If you are running in Google Colab, please run the colab notebook (Ctrl+F9)**, disregarding the 'not authored by Google' warning.\n",
        "> \n",
        "> **The installation and loading time is less than 1 minute**.\n"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "id": "pip-installation-pathway",
      "source": [
        "%%capture --no-display\n",
        "!pip install --prefer-binary pathway"
      ],
      "execution_count": null,
      "outputs": [],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "id": "logging",
      "source": [
        "import logging\n",
        "\n",
        "logging.basicConfig(level=logging.CRITICAL)"
      ],
      "execution_count": null,
      "outputs": [],
      "metadata": {}
    },
    {
      "cell_type": "markdown",
      "id": "2",
      "metadata": {},
      "source": [
        "# Writing declarative over imperative pipelines\n",
        "\n",
        "Many real-world data processing tasks \u2014 such as those in logistics, supply chain management, or event stream analysis\u2014rely on the order of events to extract meaningful insights. For example, tracking shipments, monitoring sensor data, or processing sequences of user actions all require careful handling of ordered streams. Pathway's declarative approach makes it easy to express such order-dependent logic, enabling robust solutions for these domains.\n",
        "\n",
        "In data processing, imperative pipelines require you to specify step-by-step instructions for how data should be transformed, often leading to complex and less maintainable code. Declarative pipelines, on the other hand, focus on describing what the desired outcome is, letting the system determine the best way to achieve it. Pathway encourages a declarative approach, as demonstrated in the examples below, allowing you to express complex data transformations and iterative computations in a concise and readable manner. This leads to more robust, scalable, and maintainable workflows."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "3",
      "metadata": {},
      "source": [
        "## Splitting ordered stream into chunks\n",
        "\n",
        "The following simple example demonstrates how to split an ordered stream of events into chunks based on a flag column, i.e. for the input:\n",
        "```\n",
        "    event_time | flag\n",
        "             0 | True\n",
        "             1 | False\n",
        "             2 | False\n",
        "             3 | True\n",
        "             4 | False\n",
        "             5 | False\n",
        "             6 | False\n",
        "             7 | True\n",
        "             8 | False\n",
        "             9 | True\n",
        "```\n",
        "we would expect three \"finished\" chunks: `(0,1,2)`, `(3,4,5,6)`, `(7,8)` and one unfinished chunk `(9,...)`.\n",
        "\n",
        "One way to do this would be imperative style: go through rows one-by-one in order storing current chunk in a state and emiting it whenever `flag` is equal to True, while clearing the state.\n",
        "Even though, its not recommended approach, let's see how to code it in Pathway."
      ]
    },
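    {
      "cell_type": "markdown",
      "id": "imperative-sketch",
      "metadata": {},
      "source": [
        "The imperative rule described above can be sketched in plain Python, independent of Pathway (the helper `split_chunks` below is purely illustrative and not part of any API):\n",
        "```python\n",
        "def split_chunks(rows):\n",
        "    # rows: list of (event_time, flag) pairs, already in order\n",
        "    chunks, current = [], []\n",
        "    for event_time, flag in rows:\n",
        "        if flag and current:  # a True flag closes the open chunk\n",
        "            chunks.append(tuple(current))\n",
        "            current = []\n",
        "        current.append(event_time)\n",
        "    return chunks, tuple(current)  # finished chunks and the open one\n",
        "```\n",
        "On the input above this yields the finished chunks `(0, 1, 2)`, `(3, 4, 5, 6)`, `(7, 8)` and the open chunk `(9,)`. The Pathway version below implements the same rule as a stateful reducer."
      ]
    },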
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "            | chunk        | __time__ | __diff__\n",
            "^FS5XW68... | (0, 1, 2)    | 8        | 1\n",
            "^EBRS223... | (3, 4, 5, 6) | 16       | 1\n",
            "^X83A19D... | (7, 8)       | 20       | 1\n"
          ]
        }
      ],
      "source": [
        "import pathway as pw\n",
        "\n",
        "# To use advanced features with Pathway Scale, get your free license key from\n",
        "# https://pathway.com/features and paste it below.\n",
        "# To use Pathway Community, comment out the line below.\n",
        "pw.set_license_key(\"demo-license-key-with-telemetry\")\n",
        "\n",
        "\n",
        "t = pw.debug.table_from_markdown(\n",
        "    \"\"\"\n",
        "    event_time | flag  | __time__\n",
        "             0 | True  | 2\n",
        "             1 | False | 4\n",
        "             2 | False | 6\n",
        "             3 | True  | 8\n",
        "             4 | False | 10\n",
        "             5 | False | 12\n",
        "             6 | False | 14\n",
        "             7 | True  | 16\n",
        "             8 | False | 18\n",
        "             9 | True  | 20\n",
        "    \"\"\"\n",
        ")\n",
        "\n",
        "\n",
        "def split_by_flag_imperative(input_t: pw.Table) -> pw.Table:\n",
        "    @pw.reducers.stateful_many  # type: ignore\n",
        "    def split_by_flag(\n",
        "        state: tuple[tuple[int], tuple[tuple[int]]] | None,\n",
        "        rows: list[tuple[list[int], int]],\n",
        "    ) -> int:\n",
        "        if state is None:\n",
        "            state = [[], []]\n",
        "        else:\n",
        "            state = [(state[0]), []]  # clear emitted rows\n",
        "        current_chunk, chunks_to_emit = state\n",
        "        rows = sorted(rows, key=lambda x: x[0][0])  # sort batch by event_time\n",
        "        # imperative logic for splitting into chunks\n",
        "        for row, _ in rows:\n",
        "            event_time, flag = row\n",
        "\n",
        "            if flag and len(current_chunk) > 0:\n",
        "                chunks_to_emit = list(chunks_to_emit)\n",
        "                chunks_to_emit.append(current_chunk)\n",
        "                current_chunk = [event_time]\n",
        "                state = (current_chunk, chunks_to_emit)\n",
        "            else:\n",
        "                current_chunk = list(current_chunk)\n",
        "                current_chunk.append(event_time)\n",
        "                state = (current_chunk, chunks_to_emit)\n",
        "        return state\n",
        "\n",
        "    res = (\n",
        "        input_t.groupby()\n",
        "        .reduce(coupled=split_by_flag(pw.this.event_time, pw.this.flag))\n",
        "        .select(chunk=pw.this.coupled[1])\n",
        "    )\n",
        "    return res.flatten(pw.this.chunk)._remove_retractions().with_id_from(pw.this.chunk)\n",
        "\n",
        "\n",
        "pw.debug.compute_and_print_update_stream(split_by_flag_imperative(t))"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6",
      "metadata": {},
      "source": [
        "Instead of manually managing state and control flow, Pathway allows you to define such logic using declarative constructs like `sort`, `iterate`, `groupby`. The result is a clear and concise pipeline that emits chunks of event times splitting the flag, showcasing the power and readability of declarative data processing.\n",
        "\n",
        "In the following, we tell Pathway to propagate the starting time of each chunk across the rows. This is done by declaring a simple local rule: take the starting time of a chunk from previous row or use current event time. This rule is then iterated until fixed-point, so that the information is spreaded until all rows know the starting time of their chunk.\n",
        "\n",
        "Then we can just group rows by starting time of the chunk to get a table of chunks."
      ]
    },
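    {
      "cell_type": "markdown",
      "id": "fixed-point-sketch",
      "metadata": {},
      "source": [
        "The local rule and its iteration to a fixed point can be illustrated in plain Python (again purely illustrative, not the Pathway API):\n",
        "```python\n",
        "def propagate_start_times(rows):\n",
        "    # rows: list of (event_time, flag) pairs sorted by event_time\n",
        "    start = [t if flag else None for t, flag in rows]\n",
        "    changed = True\n",
        "    while changed:  # apply the local rule until nothing changes\n",
        "        changed = False\n",
        "        for i in range(1, len(rows)):\n",
        "            if start[i] is None and start[i - 1] is not None:\n",
        "                start[i] = start[i - 1]\n",
        "                changed = True\n",
        "    return start\n",
        "```\n",
        "Grouping the rows by the propagated starting time then yields exactly the chunks. In Pathway, `pw.iterate` performs this fixed-point iteration incrementally."
      ]
    },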
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "7",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "            | chunk\n",
            "^FS5XW68... | (0, 1, 2)\n",
            "^EBRS223... | (3, 4, 5, 6)\n",
            "^X83A19D... | (7, 8)\n",
            "^TS1H0Z0... | (9,)\n"
          ]
        }
      ],
      "source": [
        "import pathway as pw\n",
        "\n",
        "\n",
        "def split_by_flag_declarative(input_t: pw.Table) -> pw.Table:\n",
        "    t_sorted = input_t + input_t.sort(input_t.event_time)\n",
        "\n",
        "    def _step(tab: pw.Table) -> pw.Table:\n",
        "        tab_prev = tab.ix(tab.prev, optional=True)\n",
        "        return tab.with_columns(\n",
        "            prev_flag_time=pw.coalesce(tab.prev_flag_time, tab_prev.prev_flag_time)\n",
        "        )\n",
        "\n",
        "    res = pw.iterate(\n",
        "        _step,\n",
        "        tab=t_sorted.with_columns(\n",
        "            prev_flag_time=pw.if_else(pw.this.flag, pw.this.event_time, None)\n",
        "        ),\n",
        "    ).without(pw.this.prev, pw.this.next)\n",
        "    res = (\n",
        "        res.groupby(pw.this.prev_flag_time)\n",
        "        .reduce(chunk=pw.reducers.sorted_tuple(pw.this.event_time))\n",
        "        .with_id_from(pw.this.chunk)\n",
        "    )\n",
        "    return res\n",
        "\n",
        "\n",
        "pw.debug.compute_and_print(split_by_flag_declarative(t))"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8",
      "metadata": {},
      "source": [
        "To illustrate the advantages of declarative solution over imperative one, consider situation which occurs very often in the real world - when input events arrive out of order.\n",
        "The declarative solution works without any changes:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "            | chunk\n",
            "^FS5XW68... | (0, 1, 2)\n",
            "^EBRS223... | (3, 4, 5, 6)\n",
            "^X83A19D... | (7, 8)\n",
            "^TS1H0Z0... | (9,)\n"
          ]
        }
      ],
      "source": [
        "t_out_of_order = pw.debug.table_from_markdown(\n",
        "    \"\"\"\n",
        "    event_time | flag  | __time__\n",
        "             0 | True  | 2\n",
        "             1 | False | 4\n",
        "             2 | False | 6\n",
        "             3 | True  | 8\n",
        "             4 | False | 20\n",
        "             5 | False | 18\n",
        "             6 | False | 16\n",
        "             7 | True  | 14\n",
        "             8 | False | 12\n",
        "             9 | True  | 10\n",
        "    \"\"\"\n",
        ")\n",
        "\n",
        "pw.debug.compute_and_print(split_by_flag_declarative(t_out_of_order))"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "10",
      "metadata": {},
      "source": [
        "while the imperative one fails to produce correct results:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "11",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "            | chunk\n",
            "^FS5XW68... | (0, 1, 2)\n",
            "^NJSF65R... | (3,)\n",
            "^GDJQK9T... | (9, 8)\n"
          ]
        }
      ],
      "source": [
        "pw.debug.compute_and_print(split_by_flag_imperative(t_out_of_order))"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "12",
      "metadata": {},
      "source": [
        "To fix this, one would have to adapt the imperative code by implementing additional buffering, handling insertion of rows at correct place in the chunk and delaying emitting results until one's sure that no earlier events will arrive. This would significantly complicate the code.\n",
        "\n",
        "Pathway's declarative approach on the other hand keeps the code clean, simple and maintainable."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "13",
      "metadata": {},
      "source": [
        "### Managing memory with forgetting\n",
        "In the example above, the declarative pipeline keeps accumulating state as more data arrives.\n",
        "Specifically, the sort, groupby, iterate, ix operators are stateful.\n",
        "\n",
        "For large streaming data, we would want to keep the memory usage bounded.\n",
        "Pathway provides a forgetting mechanism that can be used to limit the amount of state kept by these operators.\n",
        "The solution would be to wrap the input table with `_forget` method, specifying the window of interest."
      ]
    },
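    {
      "cell_type": "markdown",
      "id": "forgetting-sketch",
      "metadata": {},
      "source": [
        "The intended effect can be sketched in plain Python (purely illustrative, not the actual `forget` semantics): only events within `WINDOW` of the newest event time seen so far are kept in state.\n",
        "```python\n",
        "def within_window(events, window):\n",
        "    # events: list of (event_time, flag) pairs;\n",
        "    # keep only entries newer than (newest event time - window)\n",
        "    cutoff = max(t for t, _ in events) - window\n",
        "    return [(t, f) for (t, f) in events if t > cutoff]\n",
        "```"
      ]
    },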
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "14",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "            | chunk\n",
            "^FS5XW68... | (0, 1, 2)\n",
            "^EBRS223... | (3, 4, 5, 6)\n",
            "^X83A19D... | (7, 8)\n",
            "^TS1H0Z0... | (9,)\n"
          ]
        }
      ],
      "source": [
        "WINDOW = 7  # set it to appropriate value for your use-case: here max(chunk size, out-of-orderness)\n",
        "\n",
        "t_out_of_order_tail = t_out_of_order.forget(\n",
        "    pw.this.event_time, WINDOW, mark_forgetting_records=True\n",
        ")  # emit special \"-1\"'s for older entries to clean up obsolete state\n",
        "result = split_by_flag_declarative(\n",
        "    t_out_of_order_tail\n",
        ")  # transformation wrapped around forgetting\n",
        "result = (\n",
        "    result.filter_out_results_of_forgetting()\n",
        ")  # clean-up entries emitted due to forgetting to get a stream containing all final results\n",
        "pw.debug.compute_and_print(result)"
      ]
    }
  ],
  "metadata": {
    "jupytext": {
      "cell_metadata_filter": "-all",
      "main_language": "python",
      "notebook_metadata_filter": "-all"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.14"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}