{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uthRyjhO-SAy"
   },
   "source": [
    "# Imports"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "QhawAa3gmYuX"
   },
   "source": [
    "Import the required components:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "CIkhvpy2mP7M"
   },
   "outputs": [],
   "source": [
    "from typing import List, Optional, Union\n",
    "import dataclasses\n",
    "import pandas as pd\n",
    "\n",
    "from evidently.base_metric import InputData\n",
    "from evidently.base_metric import Metric\n",
    "from evidently.base_metric import MetricResult\n",
    "from evidently.model.widget import BaseWidgetInfo\n",
    "from evidently.renderers.base_renderer import MetricRenderer\n",
    "from evidently.renderers.base_renderer import default_renderer\n",
    "from evidently.renderers.html_widgets import CounterData\n",
    "from evidently.renderers.html_widgets import header_text\n",
    "from evidently.renderers.html_widgets import plotly_figure"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "H-bXHuc-4OmI"
   },
   "source": [
    "# Understand the architecture"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2hzP8JSu4SwV"
   },
   "source": [
    "The `metric` is a key component of Evidently. Each `test` uses a metric for calculations. If you want to create a test, you need to create a metric first. Both tests and metrics have `renders` which might look differently. If you are creating metric or test for your internal use, you might skip some steps: e.g., do not create a sophisticated visualization if you do not need it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dmZwrhS65fli"
   },
   "source": [
    "![architecture_metric-min.png]()"
   ]
  },
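  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the diagram concrete, here is a minimal pure-Python sketch of this flow. The class names below (`ToyMetric`, `ToyResult`, `ToyRenderer`) are simplified stand-ins for illustration only; they are not part of the Evidently API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified stand-ins that mirror the metric -> result -> render flow:\n",
    "# a metric computes a result object, and a renderer turns that result\n",
    "# into an output (JSON here; HTML widgets in Evidently itself).\n",
    "from dataclasses import dataclass\n",
    "\n",
    "\n",
    "@dataclass\n",
    "class ToyResult:  # plays the role of MetricResult\n",
    "    sum_value: float\n",
    "\n",
    "\n",
    "class ToyMetric:  # plays the role of Metric\n",
    "    def calculate(self, values) -> ToyResult:\n",
    "        return ToyResult(sum_value=sum(values))\n",
    "\n",
    "\n",
    "class ToyRenderer:  # plays the role of MetricRenderer\n",
    "    def render_json(self, result: ToyResult) -> dict:\n",
    "        return {\"sum_value\": result.sum_value}\n",
    "\n",
    "\n",
    "result = ToyMetric().calculate([1, 2, 3])\n",
    "ToyRenderer().render_json(result)  # {'sum_value': 6}"
   ]
  },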
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "siGN0Wmb-XID"
   },
   "source": [
    "# Create a new metric"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qpBe_6TimcPg"
   },
   "source": [
    "Let's imagine you want to create a metric that sums up all the values in a column.\n",
    "\n",
    "First, you need to define the resulting `dataclass`.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "KEPvuJ_Pm-0x"
   },
   "outputs": [],
   "source": [
    "class MyMetricResult(MetricResult):\n",
    "    sum_value: float"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "TWdAzLqdnLA3"
   },
   "source": [
    "Then, you need to create the class of the `metric` itself. It should have the `calculate` method which takes `InputData` - a class that contains reference dataset, current dataset and column mapping.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "7EEqXnpKnKG9"
   },
   "outputs": [],
   "source": [
    "class MyMetric(Metric[MyMetricResult]):\n",
    "  column_name: str\n",
    "\n",
    "  def __init__(self, column_name: str):\n",
    "    self.column_name = column_name\n",
    "    super().__init__()\n",
    "\n",
    "  def calculate(self, data: InputData) -> MyMetricResult:\n",
    "    metric_value = data.current_data[self.column_name].sum()\n",
    "    return MyMetricResult(\n",
    "        sum_value = metric_value\n",
    "    )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tyiroJJv-daK"
   },
   "source": [
    "# Define the metric render"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4cwo1d3doY4Q"
   },
   "source": [
    "Next, you need to define the way the metric will look in the HTML reports or JSON export. Let's make a `render` class!\n",
    "\n",
    "**Note:** HTML render is optional. You can skip it if you do not plan to use the metric independently, and only want to call it as part of the test.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "t6oIlKwUonbC"
   },
   "outputs": [],
   "source": [
    "@default_renderer(wrap_type=MyMetric)\n",
    "class MyMetricRenderer(MetricRenderer):\n",
    "    def render_json(self, obj: MyMetric) -> dict:\n",
    "        result = dataclasses.asdict(obj.get_result())\n",
    "        return result\n",
    "\n",
    "    def render_html(self, obj: MyMetric) -> List[BaseWidgetInfo]:\n",
    "        metric_result = obj.get_result()\n",
    "        return [\n",
    "            # helper function for visualisation. More options here More options avaliable https://github.com/evidentlyai/evidently/blob/main/src/evidently/renderers/html_widgets.py\n",
    "            header_text(label=f\"My metrics value is {metric_result.sum_value}\"),\n",
    "        ]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "f8XJ2LiMf7x5"
   },
   "source": [
    "Here is how the metric output looks if you apply it on a sample data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 142
    },
    "id": "TQugO5uTpYzh",
    "outputId": "64fd72e7-a3c7-4579-ae67-7893bf7d9b0d"
   },
   "outputs": [],
   "source": [
    "from sklearn import datasets\n",
    "from evidently import ColumnMapping\n",
    "from evidently.report import Report\n",
    "\n",
    "adult_data = datasets.fetch_openml(name='adult', version=2, as_frame='auto')\n",
    "adult = adult_data.frame\n",
    "\n",
    "data_drift_dataset_report = Report(metrics=[\n",
    "    MyMetric(column_name='age')\n",
    "])\n",
    "\n",
    "data_drift_dataset_report.run(reference_data=None, current_data=adult)\n",
    "data_drift_dataset_report"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "MiBUCEMkqj_s"
   },
   "source": [
    "In the previous step, we used a basic visualization. You might want to add more information to the widget: for example, pass the column name, show the metric value for the reference dataset, or add some visualizations (using Plotly).\n",
    "\n",
    "To do that, you need to get these values in the `metric` class and pass to the `render ` using `MetricResult`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9xBzsABRqA_t"
   },
   "outputs": [],
   "source": [
    "from plotly import graph_objs as go\n",
    "\n",
    "\n",
    "class MyMetricResult(MetricResult):\n",
    "    feature_name: str\n",
    "    current_sum_value: float\n",
    "    x_values_for_hist: list\n",
    "    y_values_for_hist: list\n",
    "    reference_sum_value: Optional[float] # reference data could absence so we will have None in that case\n",
    "\n",
    "\n",
    "class MyMetric(Metric[MyMetricResult]):\n",
    "  column_name: str\n",
    "\n",
    "  def __init__(self, column_name: str) -> None:\n",
    "    self.column_name = column_name\n",
    "    super().__init__()\n",
    "\n",
    "  def calculate(self, data: InputData) -> MyMetricResult:\n",
    "    reference_sum_value = None\n",
    "    if data.reference_data is not None:\n",
    "      reference_sum_value = data.reference_data[self.column_name].sum()\n",
    "    current_sum_value = data.current_data[self.column_name].sum()\n",
    "    # let's pretend we calculate some data for plot\n",
    "    x_values_for_hist = [1, 2]\n",
    "    y_values_for_hist = [2, 4]\n",
    "    return MyMetricResult(\n",
    "        feature_name = self.column_name,\n",
    "        current_sum_value = current_sum_value,\n",
    "        x_values_for_hist = x_values_for_hist,\n",
    "        y_values_for_hist = y_values_for_hist,\n",
    "        reference_sum_value = reference_sum_value\n",
    "    )\n",
    "\n",
    "\n",
    "@default_renderer(wrap_type=MyMetric)\n",
    "class MyMetricRenderer(MetricRenderer):\n",
    "    def render_json(self, obj: MyMetric, include_render: bool = False,\n",
    "        include: \"IncludeOptions\" = None, exclude: \"IncludeOptions\" = None,) -> dict:\n",
    "        result = obj.get_result().get_dict(include_render, include, exclude)\n",
    "        # we don't need plot data here\n",
    "        result.pop(\"x_values_for_hist\", None)\n",
    "        result.pop(\"y_values_for_hist\", None)\n",
    "        return result\n",
    "\n",
    "    def render_html(self, obj: MyMetric) -> List[BaseWidgetInfo]:\n",
    "        metric_result = obj.get_result()\n",
    "        figure = go.Figure(go.Bar(x=metric_result.x_values_for_hist, y=metric_result.y_values_for_hist))\n",
    "\n",
    "        return [\n",
    "            header_text(label=f\"The sum of '{metric_result.feature_name}' column is {metric_result.current_sum_value} (current)\"),\n",
    "            header_text(label=f\"The sum of '{metric_result.feature_name}' column is {metric_result.reference_sum_value} (reference)\"),\n",
    "            plotly_figure(title=\"Example plot\", figure=figure)\n",
    "        ]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BLKRWsOYBJhl"
   },
   "source": [
    "# Use the metric"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PFUfWbcjBN0Y"
   },
   "source": [
    "Here is how you can include the new metric in the Report.\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 790
    },
    "id": "hu-tYSW7wPkG",
    "outputId": "a357bca0-4cac-4789-f9ee-cde983459aa7"
   },
   "outputs": [],
   "source": [
    "data_drift_dataset_report = Report(metrics=[\n",
    "    MyMetric(column_name='age')\n",
    "])\n",
    "\n",
    "data_drift_dataset_report.run(reference_data=None, current_data=adult)\n",
    "data_drift_dataset_report"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_drift_dataset_report.json()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Eaxvt6P8-qTY"
   },
   "source": [
    "# Create a new test"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Fcy3cjP5gYTA"
   },
   "source": [
    "If you want to be able to compare the new metric against a defined condition as part of a Test Suite, you need to create a Test.\n",
    "\n",
    "To make a Test, you need a Metric:\n",
    "\n",
    "*   The metric calculates the value\n",
    "*   The test gets the values, and performs the comparison\n",
    "\n",
    "We already got the metric. Now, let's make a test.\n",
    "\n",
    "When you create a test, you can also define the default test conditions. They will apply if you call the test \"as is\" without passing a custom constraint.\n",
    "\n",
    "**Note:** for simple simple scalar functions instead of `MyMetric` and `MyTest` you can use `CustomValueMetric` and `CustomValueTest` respectively. `CustomValueTest(func=<function(InputData)->float>, title='custom test', lte=0.2)`"
   ]
  },
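  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, the sum-of-column check could be sketched with these helpers instead of custom classes. The import paths below are an assumption and may differ between Evidently versions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch: CustomValueMetric/CustomValueTest wrap a plain\n",
    "# function(InputData) -> float, so no Metric/Test subclassing is needed.\n",
    "# The import paths are an assumption and may vary by Evidently version.\n",
    "from evidently.metrics.custom_metric import CustomValueMetric\n",
    "from evidently.tests.custom_test import CustomValueTest\n",
    "from evidently.test_suite import TestSuite\n",
    "\n",
    "\n",
    "def sum_of_age(data: InputData) -> float:\n",
    "    return float(data.current_data['age'].sum())\n",
    "\n",
    "\n",
    "custom_report = Report(metrics=[\n",
    "    CustomValueMetric(func=sum_of_age, title='Sum of age')\n",
    "])\n",
    "\n",
    "custom_tests = TestSuite(tests=[\n",
    "    CustomValueTest(func=sum_of_age, title='custom test', gt=0)\n",
    "])"
   ]
  },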
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "0Mu-QYJ-yOBy"
   },
   "outputs": [],
   "source": [
    "from abc import ABC\n",
    "from evidently.utils.types import Numeric\n",
    "from evidently.renderers.base_renderer import TestHtmlInfo\n",
    "from evidently.renderers.base_renderer import TestRenderer\n",
    "from evidently.tests.base_test import BaseCheckValueTest\n",
    "from evidently.tests.base_test import GroupData\n",
    "from evidently.tests.base_test import GroupingTypes\n",
    "from evidently.tests.base_test import TestValueCondition\n",
    "\n",
    "# make a group for test. It used for grouping tests in the report\n",
    "MY_GROUP = GroupData(\"my_group\", \"My Group\", \"\")\n",
    "GroupingTypes.TestGroup.add_value(MY_GROUP)\n",
    "\n",
    "class MyTest(BaseCheckValueTest, ABC):\n",
    "    name = \"My test\"\n",
    "    group = MY_GROUP.id\n",
    "\n",
    "    column_name: str\n",
    "    # define a metric used for calculation\n",
    "    _metric: MyMetric\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        column_name: str,\n",
    "        eq: Optional[Numeric] = None,\n",
    "        gt: Optional[Numeric] = None,\n",
    "        gte: Optional[Numeric] = None,\n",
    "        is_in: Optional[List[Union[Numeric, str, bool]]] = None,\n",
    "        lt: Optional[Numeric] = None,\n",
    "        lte: Optional[Numeric] = None,\n",
    "        not_eq: Optional[Numeric] = None,\n",
    "        not_in: Optional[List[Union[Numeric, str, bool]]] = None,\n",
    "    ):\n",
    "        self.column_name = column_name\n",
    "        super().__init__(eq=eq, gt=gt, gte=gte, is_in=is_in, lt=lt, lte=lte, not_eq=not_eq, not_in=not_in)\n",
    "        self._metric = MyMetric(self.column_name)\n",
    "\n",
    "    def get_condition(self) -> TestValueCondition:\n",
    "        # if condition specified like lte=8 or gt=3 etc\n",
    "        if self.condition.has_condition():\n",
    "            return self.condition\n",
    "        # if there is no condition but we have reference and we want to calculate the condition by reference\n",
    "        ref_result = self._metric.get_result().reference_sum_value\n",
    "        if ref_result is not None:\n",
    "          return TestValueCondition(lte=ref_result)\n",
    "        # if there is no condition, no reference data but we have some idea about the value should be\n",
    "        return TestValueCondition(gt=0)\n",
    "\n",
    "    # define the value we will compare against condition\n",
    "    def calculate_value_for_test(self) -> Numeric:\n",
    "        return self._metric.get_result().current_sum_value\n",
    "    # define the way test will look like in a table\n",
    "    def get_description(self, value: Numeric) -> str:\n",
    "        return f\"The sum of '{self._metric.get_result().feature_name}' column is {self._metric.get_result().current_sum_value}. The test threshold is {self.get_condition()}\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IwxRik0i0cd3"
   },
   "source": [
    "You can add a` render `class for the test as well. This class also should use data from the metric result only.\n",
    "\n",
    "**Note:** it's optional. You can still use the test if you do not define the render. In this case, Evidently will use the information from the 'get_description' method to show the test output in the preview. However, once you click on \"details\" there will be no supporting visualization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zAwWFo9H1XQh"
   },
   "outputs": [],
   "source": [
    "@default_renderer(wrap_type=MyTest)\n",
    "class MyTestRenderer(TestRenderer):\n",
    "    def render_json(self, obj: MyTest) -> dict:\n",
    "        result = super().render_json(obj)\n",
    "        metric_result = obj._metric.get_result()\n",
    "        result[\"parameters\"][\"condition\"] = obj.get_condition().as_dict()\n",
    "        result[\"parameters\"][\"reference_sum_value\"] = metric_result.reference_sum_value\n",
    "        result[\"parameters\"][\"current_sum_value\"] = metric_result.current_sum_value\n",
    "        return result\n",
    "\n",
    "    def render_html(self, obj: MyTest) -> List[BaseWidgetInfo]:\n",
    "        info = super().render_html(obj)\n",
    "        metric_result = obj._metric.get_result()\n",
    "        figure = go.Figure(go.Bar(x=metric_result.x_values_for_hist, y=metric_result.y_values_for_hist))\n",
    "        info.with_details(\"\", plotly_figure(title=\"Example plot\", figure=figure))\n",
    "        return info"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "QF8sysiFBewA"
   },
   "source": [
    "# Use the new test"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PonPrMpeA_eB"
   },
   "source": [
    "Here is how you can include your new test in a Test Suite:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 348
    },
    "id": "F3chcKDRgcg5",
    "outputId": "f07a9ea6-be5e-4073-a0a8-871dfe66a20e"
   },
   "outputs": [],
   "source": [
    "from evidently.test_suite import TestSuite\n",
    "\n",
    "my_tests = TestSuite(tests=[\n",
    "    MyTest(column_name='age')\n",
    "])\n",
    "\n",
    "my_tests.run(reference_data=adult[:5000], current_data=adult[5000:10000])\n",
    "my_tests"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "y0t6RJiTBk37"
   },
   "source": [
    "# Create a custom report with the new metric"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WRsoNAwko39-"
   },
   "source": [
    "You can combine your new metric with other metrics available in the library in a single report."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "id": "PXf-QZpZnoaP",
    "outputId": "5ca3af78-bd40-4107-c169-bf5b03d87d93"
   },
   "outputs": [],
   "source": [
    "from evidently.metrics import *\n",
    "\n",
    "data_drift_dataset_report = Report(metrics=[\n",
    "    MyMetric(column_name='age'),\n",
    "    ColumnDriftMetric(column_name='age'),\n",
    "    DatasetMissingValuesMetric(),\n",
    "\n",
    "])\n",
    "\n",
    "data_drift_dataset_report.run(reference_data=adult[:5000], current_data=adult[5000:10000])\n",
    "data_drift_dataset_report"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hyMWJu7qBz1n"
   },
   "source": [
    "# Create a custom test suite with a new test"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nMzmGz2AB5F9"
   },
   "source": [
    "It works the same way for tests."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 544
    },
    "id": "4Acyeo4apxA8",
    "outputId": "b209240f-eadd-4e28-825d-274744d64dbd"
   },
   "outputs": [],
   "source": [
    "from evidently.tests import *\n",
    "\n",
    "\n",
    "my_tests = TestSuite(tests=[\n",
    "    MyTest(column_name='age'),\n",
    "    TestNumberOfRowsWithMissingValues(),\n",
    "    TestNumberOfConstantColumns()\n",
    "\n",
    "])\n",
    "\n",
    "my_tests.run(reference_data=adult[:5000], current_data=adult[5000:10000])\n",
    "my_tests"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
