{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "50cf75e5",
   "metadata": {},
   "source": [
    "# Example notebook: Spliced Binned-Pareto distribution fitted using a DistributionalTCN\n",
    "\n",
    "This notebook is an example how to\n",
    "(a) use a DistributionalTCN to fit a Spliced Binned-Pareto distribution to given data, and \n",
    "(b) compare the Spliced Binned-Pareto distribution fit to other distributions' fits.\n",
    "\n",
    "1. [Library imports](#imports)\n",
    "1. [Data Generation](#data)\n",
    "1. [Train a Distributional TCN](#tcn)\n",
    "    1. (new) Spliced Binned-Pareto distribution\n",
    "    1. Binned distribution\n",
    "    1. Gaussian distribution\n",
    "1. [Evaluation: Probability-Probability plots on test data](#pp)\n",
    "\n",
    "\n",
    "\n",
    "<div style=\"text-align: right\"> Total Runtime (1 gpu): 30 minutes </div>\n",
    "\n",
    "\n",
    "## Library Imports <a name=\"imports\"></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d33e0cfd",
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import os\n",
    "import random\n",
    "from scipy import stats\n",
    "import time\n",
    "from tqdm import tqdm, trange\n",
    "\n",
    "import torch\n",
    "from torch import optim\n",
    "\n",
    "from spliced_binned_pareto import SplicedBinnedPareto, Binned\n",
    "from distr_tcn import DistributionalTCN\n",
    "from gaussian_model import GaussianModel\n",
    "from training_functions import (\n",
    "    train_step_from_batch,\n",
    "    eval_on_series,\n",
    "    plot_prediction,\n",
    "    highlight_min,\n",
    ")\n",
    "from data_functions import create_ds, create_ds_asymmetric\n",
    "\n",
    "font = {\"family\": \"serif\", \"weight\": \"normal\", \"size\": 12}\n",
    "matplotlib.rc(\"font\", **font)\n",
    "\n",
    "\n",
    "###########################\n",
    "# Get device information\n",
    "###########################\n",
    "cuda_id = \"0\"\n",
    "if torch.cuda.is_available():\n",
    "    dev = f\"cuda:{cuda_id}\"\n",
    "else:\n",
    "    dev = \"cpu\"\n",
    "device = torch.device(dev)\n",
    "print(\"Device is\", device)\n",
    "\n",
    "\n",
    "# Reproducibility\n",
    "seed = 42\n",
    "os.environ[\"PYTHONHASHSEED\"] = str(seed)\n",
    "# Torch RNG\n",
    "torch.manual_seed(seed)\n",
    "torch.cuda.manual_seed(seed)\n",
    "torch.cuda.manual_seed_all(seed)\n",
    "# Python RNG\n",
    "np.random.seed(seed)\n",
    "random.seed(seed)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0787a3aa",
   "metadata": {},
   "source": [
    "## Data Generation <a name=\"data\"></a>\n",
    "\n",
    "Here we generate time series with a sinusoidal mean, and asymmetric heavy-tailed noise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d5f093f",
   "metadata": {},
   "outputs": [],
   "source": [
    "t_dof = [10, 10]\n",
    "noise_mult = [0.25, 0.25]\n",
    "xi = [1 / 50.0, 1 / 25.0]\n",
    "\n",
    "train_ts_tensor = create_ds_asymmetric(5_000, t_dof, noise_mult, xi)\n",
    "val_ts_tensor = create_ds_asymmetric(1_000, t_dof, noise_mult, xi)\n",
    "test_ts_tensor = create_ds_asymmetric(1_000, t_dof, noise_mult, xi)\n",
    "\n",
    "train_ts_tensor = train_ts_tensor.to(device)\n",
    "val_ts_tensor = val_ts_tensor.to(device)\n",
    "test_ts_tensor = test_ts_tensor.to(device)\n",
    "\n",
    "plt.figure(figsize=(15, 5))\n",
    "plt.plot(train_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Training dataset\")\n",
    "plt.show()\n",
    "\n",
    "plt.figure(figsize=(15, 5))\n",
    "plt.plot(val_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Validation dataset\")\n",
    "plt.show()\n",
    "\n",
    "plt.figure(figsize=(15, 5))\n",
    "plt.plot(test_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Test dataset\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "000346fe",
   "metadata": {},
   "source": [
    "## Train a Distributional TCN <a name=\"tcn\"></a>\n",
    "\n",
    "Here we design a Distributional Temporal Convolution Network (DistributionalTCN) to learn the one-step ahead (lead_time) predictive distribution from the series' previous 100 observations (context_length). We use the DistributionalTCN to compare the fits of 3 predictive distributions: Spliced Binned-Pareto, Binned, and Gaussian.\n",
    "\n",
    "| Distribution      | Avg Time | Approx Total Time (1 gpu) | \n",
    "| ----------- | ----------- | ----------- |\n",
    "| Spliced Binned-Pareto      | 40s/epoch       | 12 min       |\n",
    "| Binned   | 40s/epoch        | 12 min       |\n",
    "| Gaussian   | 3s/epoch        | 2 min       |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "994d1300",
   "metadata": {},
   "outputs": [],
   "source": [
    "context_length = 100\n",
    "lead_time = 1\n",
    "\n",
    "# Defining the main hyperparameters\n",
    "bins_upper_bound = train_ts_tensor.max()\n",
    "bins_lower_bound = train_ts_tensor.min()\n",
    "nbins = 100\n",
    "percentile_tail = 0.05\n",
    "tcn_layers = 4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "906415c6",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "title_methods = [\"Spliced Binned-Pareto\", \"Binned\", \"Gaussian\"]\n",
    "\n",
    "dict_storage = dict(\n",
    "    zip(\n",
    "        list(\n",
    "            map(\n",
    "                lambda x: x.lower().replace(\" \", \"\").replace(\"-\", \"\"),\n",
    "                title_methods,\n",
    "            )\n",
    "        ),\n",
    "        [dict(), dict(), dict()],\n",
    "    )\n",
    ")\n",
    "\n",
    "\n",
    "for title_method in title_methods:\n",
    "    method_str = title_method.lower().replace(\" \", \"\").replace(\"-\", \"\")\n",
    "\n",
    "    ####################################################################################\n",
    "    # Specifying the predictive output distribution\n",
    "    ####################################################################################\n",
    "    if method_str == \"splicedbinnedpareto\":\n",
    "\n",
    "        spliced_binned_pareto_distr = SplicedBinnedPareto(\n",
    "            bins_lower_bound=bins_lower_bound,\n",
    "            bins_upper_bound=bins_upper_bound,\n",
    "            nbins=nbins,\n",
    "            percentile_gen_pareto=torch.tensor(percentile_tail),\n",
    "            validate_args=None,\n",
    "        )\n",
    "        spliced_binned_pareto_distr.to_device(device)\n",
    "\n",
    "        output_distribution = spliced_binned_pareto_distr\n",
    "        output_channels = nbins + 4\n",
    "\n",
    "    if method_str == \"binned\":\n",
    "\n",
    "        binned_distr = Binned(\n",
    "            bins_lower_bound=bins_lower_bound,\n",
    "            bins_upper_bound=bins_upper_bound,\n",
    "            nbins=nbins,\n",
    "            validate_args=None,\n",
    "        )\n",
    "        binned_distr.to_device(device)\n",
    "\n",
    "        output_distribution = binned_distr\n",
    "        output_channels = nbins\n",
    "\n",
    "    if method_str == \"gaussian\":\n",
    "\n",
    "        gaussian_distr = GaussianModel(\n",
    "            mu=torch.tensor(0.0), sigma=torch.tensor(1.0), device=device\n",
    "        )\n",
    "        gaussian_distr.to_device(device)\n",
    "\n",
    "        output_distribution = gaussian_distr\n",
    "        output_channels = 2\n",
    "\n",
    "    ####################################################################################\n",
    "    # Creating the Distributional TCN\n",
    "    ####################################################################################\n",
    "    distr_tcn = DistributionalTCN(\n",
    "        in_channels=1,  # channels in the time series (univariate)\n",
    "        out_channels=output_channels,  # channels in the time series (num parameters)\n",
    "        kernel_size=3,\n",
    "        channels=3,  # channels inside the TCN, keep equal to out_channels for simplicity, expand for better performance\n",
    "        layers=tcn_layers,  # number of TCN blocks\n",
    "        output_distr=output_distribution,\n",
    "    )\n",
    "    distr_tcn.to(device)\n",
    "    distr_tcn = distr_tcn.float()\n",
    "\n",
    "    ####################################################################################\n",
    "    # Training TCN for predictive distribution:\n",
    "    ####################################################################################\n",
    "    learning_rate = 0.0002\n",
    "    optimizer = optim.Adam(params=distr_tcn.parameters(), lr=learning_rate)\n",
    "\n",
    "    ts_len = train_ts_tensor.shape[2]\n",
    "    val_ts_len = val_ts_tensor.shape[2]\n",
    "    epochs = 25\n",
    "    epoch_mod = 5\n",
    "\n",
    "    train_losses = []\n",
    "    val_losses = []\n",
    "    predictions_list = []\n",
    "\n",
    "    dict_storage[method_str] = dict(\n",
    "        (\n",
    "            (k, eval(k))\n",
    "            for k in (\n",
    "                \"method_str\",\n",
    "                \"title_method\",\n",
    "                \"learning_rate\",\n",
    "                \"epochs\",\n",
    "                \"context_length\",\n",
    "                \"lead_time\",\n",
    "            )\n",
    "        )\n",
    "    )\n",
    "\n",
    "    ####################################################################################\n",
    "    # Train model\n",
    "    # Running a DistributionalTCN on data\n",
    "    ####################################################################################\n",
    "    start = time.time()\n",
    "\n",
    "    t = trange(epochs, desc=f\"[{title_method}]\", leave=True)\n",
    "    for epoch in t:\n",
    "\n",
    "        log_loss_train = eval_on_series(\n",
    "            distr_tcn,\n",
    "            optimizer,\n",
    "            train_ts_tensor,\n",
    "            ts_len,\n",
    "            context_length,\n",
    "            is_train=True,\n",
    "            return_predictions=False,\n",
    "            lead_time=lead_time,\n",
    "        )\n",
    "        epoch_train_loss = np.mean(log_loss_train)\n",
    "        train_losses.append(epoch_train_loss)\n",
    "\n",
    "        if epoch % epoch_mod == 0:\n",
    "            log_loss_val, epoch_predictions = eval_on_series(\n",
    "                distr_tcn,\n",
    "                optimizer,\n",
    "                val_ts_tensor,\n",
    "                val_ts_len,\n",
    "                context_length,\n",
    "                is_train=False,\n",
    "                return_predictions=True,\n",
    "                lead_time=lead_time,\n",
    "            )\n",
    "            predictions_list.append(epoch_predictions)\n",
    "            epoch_val_loss = np.mean(log_loss_val)\n",
    "            val_losses.append(epoch_val_loss)\n",
    "\n",
    "            if (\n",
    "                method_str == \"splicedbinnedpareto\"\n",
    "            ):  # No need to plot validation updates for all methods; demonstrate for one\n",
    "                plot_prediction(\n",
    "                    val_ts_tensor,\n",
    "                    predictions_list[-1],\n",
    "                    context_length,\n",
    "                    lead_time,\n",
    "                    end=1500,\n",
    "                )\n",
    "                title = plt.gca().get_title()\n",
    "                title = title_method + \" MAE: \" + title\n",
    "                plt.title(title)\n",
    "                plt.show()\n",
    "\n",
    "            t.set_description(\n",
    "                f\"[{title_method}] Train loss: {epoch_train_loss:.3f}, Val\"\n",
    "                f\" loss: {epoch_val_loss:.3f}\"\n",
    "            )\n",
    "            t.refresh()\n",
    "\n",
    "    end = time.time()\n",
    "    print(\"runtime:\", end - start)\n",
    "    dict_storage[method_str][\"runtime\"] = end - start\n",
    "    dict_storage[method_str][\"distr_tcn\"] = distr_tcn\n",
    "\n",
    "    ####################################################################################\n",
    "    # Plot losses\n",
    "    ####################################################################################\n",
    "    fig = plt.figure(figsize=[16, 4])\n",
    "    spec = gridspec.GridSpec(ncols=4, nrows=1, figure=fig)\n",
    "\n",
    "    f_ax1 = fig.add_subplot(spec[0, 0])\n",
    "    plt.plot(train_losses, label=\"Training\")\n",
    "    plt.plot(\n",
    "        epoch_mod + epoch_mod * np.arange(len(val_losses)),\n",
    "        [i for i in val_losses],\n",
    "        label=\"Validation\",\n",
    "    )\n",
    "    plt.legend()\n",
    "    plt.xlabel(\"Epoch\")\n",
    "    plt.ylabel(\"Loss\")\n",
    "    plt.title(f\"{title_method} DistributionalTCN\")\n",
    "\n",
    "    ####################################################################################\n",
    "    # Plot of distribution fit on validation data\n",
    "    ####################################################################################\n",
    "    f_ax2 = fig.add_subplot(spec[0, 1:])\n",
    "    fig = plot_prediction(\n",
    "        val_ts_tensor,\n",
    "        predictions_list[-1],\n",
    "        context_length,\n",
    "        lead_time,\n",
    "        start=0,\n",
    "        end=val_ts_len,\n",
    "        fig=f_ax2,\n",
    "    )\n",
    "    title = plt.gca().get_title()\n",
    "    plt.ylim(-5.0, 30.0)\n",
    "    plt.xlabel(f\"Time increment $t$\")\n",
    "    plt.ylabel(r\"Series values\")\n",
    "    plt.ylim(-5.0, 30.0)\n",
    "    title = title_method + \" validation MAE: \" + title\n",
    "    plt.title(title)\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c17de981",
   "metadata": {},
   "source": [
    "## Evaluation: Probability-Probability plots on test data <a name=\"pp\"></a>\n",
    "\n",
    "We evaluate the accuracy of the density estimation of each of the method using Probability-Probability (PP) plots (PP-plots). For a given quantile level $q$, we compute $ y_q$ the fraction of points that fell below the given quantile $z_q{(t)} $ of their corresponding predictive distribution:\n",
    "\\begin{align}\n",
    "  y_q = \\frac{\\sum_{t=2}^{T} \\mathbb{I}[ {x}_t < z_{1-q}{(t)} ] }{T}, \\hspace{40pt}   z_q{(t)} : p\\left( {x}_{t} > z_q{(t)} \\middle| {x}_{1:t-1} \\right)< q\n",
    "\\end{align}\n",
    "To obtain a quantitative score, we measure how good the tail estimate is by computing the Mean Absolute Error (MAE) between  $y_q $ and $q $ for all measured quantiles $q$.\n",
    "\n"
   ]
  },
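  {
   "cell_type": "markdown",
   "id": "9f3a7c21",
   "metadata": {},
   "source": [
    "As a minimal, self-contained sketch of this calibration check (independent of the fitted models above, and assuming standard-normal data), we can verify that for a well-calibrated distribution the empirical fractions $y_q$ lie close to the diagonal $y_q = q$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f3a7c22",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical standalone sketch (not part of the original workflow):\n",
    "# sample from a standard normal, compute y_q = fraction of samples below\n",
    "# the true quantile z_q, and check that the (q, y_q) pairs hug the diagonal.\n",
    "rng = np.random.default_rng(0)\n",
    "samples = rng.standard_normal(100_000)\n",
    "qs = np.linspace(0.05, 0.95, 19)\n",
    "y_q = np.array([(samples < stats.norm.ppf(q)).mean() for q in qs])\n",
    "print(\"max |y_q - q|:\", np.abs(y_q - qs).max())  # small when calibrated"
   ]
  },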
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2531b43",
   "metadata": {},
   "outputs": [],
   "source": [
    "def quantile_to_str(q):\n",
    "    \"\"\"\n",
    "    Quick function to cast quantile decimal to q-prefixed string\n",
    "    \"\"\"\n",
    "    return \"q-\" + str(np.round(q, 3))\n",
    "\n",
    "\n",
    "lower_tail_end = percentile_tail\n",
    "upper_tail_start = 1 - percentile_tail\n",
    "\n",
    "likelihoods_of_interest = np.linspace(0.001, lower_tail_end, 25)\n",
    "quantile_levels = torch.tensor(\n",
    "    np.unique(\n",
    "        np.round(\n",
    "            np.concatenate(\n",
    "                (\n",
    "                    likelihoods_of_interest,\n",
    "                    np.linspace(lower_tail_end, upper_tail_start, 81),\n",
    "                    1 - likelihoods_of_interest,\n",
    "                )\n",
    "            ),\n",
    "            3,\n",
    "        )\n",
    "    )\n",
    ")\n",
    "quantile_strs = list(map(quantile_to_str, quantile_levels.numpy()))\n",
    "quantile_levels = quantile_levels.to(torch.device(dev))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69e9b43f",
   "metadata": {},
   "outputs": [],
   "source": [
    "ts_out_tensor = test_ts_tensor.float()\n",
    "ts_len = ts_out_tensor.shape[2]\n",
    "data_out = dict(\n",
    "    time=np.arange(ts_out_tensor.shape[-1] + lead_time),\n",
    "    ts=np.concatenate(\n",
    "        (ts_out_tensor.cpu().squeeze(), np.array([np.nan] * lead_time))\n",
    "    ),\n",
    ")\n",
    "\n",
    "\n",
    "for method_str in list(dict_storage.keys()):\n",
    "\n",
    "    # Get the stored DistributionTCN fitted for the given method\n",
    "    distr_tcn = dict_storage[method_str][\"distr_tcn\"]\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "\n",
    "    data_out[method_str] = dict()\n",
    "    for q_str in quantile_strs:\n",
    "        data_out[method_str][q_str] = [np.nan] * (context_length + 2)\n",
    "\n",
    "    # Loop through the time series\n",
    "    t = trange(\n",
    "        ts_len - context_length - lead_time, desc=title_method, leave=True\n",
    "    )\n",
    "    start = time.time()\n",
    "    for i in t:\n",
    "\n",
    "        ts_chunk = ts_out_tensor[:, :, i : i + context_length]\n",
    "        distr_output = distr_tcn(ts_chunk)\n",
    "\n",
    "        # z_q(t) quantile of predictive distribution\n",
    "        quantile_values = distr_output.icdf(quantile_levels)\n",
    "        for qs, qv in zip(quantile_strs, quantile_values):\n",
    "            data_out[method_str][qs].append(qv.item())\n",
    "\n",
    "        if i == t.total - 1:\n",
    "            t.set_description(\n",
    "                f\"[{title_method}] runtime {int(time.time()-start)}s\"\n",
    "            )\n",
    "            t.refresh()\n",
    "\n",
    "    # y_q fraction of points below z_q(t)\n",
    "    calibration_pairs = []\n",
    "    for qs, ql in zip(quantile_strs, quantile_levels.cpu().numpy()):\n",
    "        proportion_observations = np.array(\n",
    "            list(\n",
    "                map(\n",
    "                    lambda x: x[0] < x[1],\n",
    "                    zip(data_out[\"ts\"], data_out[method_str][qs]),\n",
    "                )\n",
    "            )\n",
    "        ).sum() / np.sum(np.isfinite(np.array(data_out[method_str][qs])))\n",
    "        calibration_pairs.append([ql, proportion_observations])\n",
    "    calibration_pairs = np.array(calibration_pairs)\n",
    "    data_out[method_str][\"calibration\"] = calibration_pairs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "639ba40a",
   "metadata": {},
   "outputs": [],
   "source": [
    "fig = plt.figure(figsize=[15, 5], constrained_layout=True)\n",
    "spec = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)\n",
    "mae_table = pd.DataFrame(\n",
    "    [],\n",
    "    columns=[\"Lower tail\", \"Base\", \"Upper tail\", \"Full distribution\"],\n",
    "    index=title_methods,\n",
    ")\n",
    "\n",
    "# Lower tail\n",
    "start = 0.0\n",
    "end = lower_tail_end\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax1 = fig.add_subplot(spec[0, 0])\n",
    "alpha = 0.5\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Lower tail\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        label=f\"{title_method} {np.round(mae,3)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
    "plt.xlabel(f\"CDF of fitted distribution\")\n",
    "plt.ylabel(\"Empirical CDF\")\n",
    "# Proportion of data below quantile');\n",
    "plt.title(\"Lower tail PP-plot\")\n",
    "\n",
    "# Base distribution\n",
    "start = lower_tail_end\n",
    "end = upper_tail_start\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax1 = fig.add_subplot(spec[0, 1])\n",
    "alpha = 0.5\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Base\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        alpha=alpha,\n",
    "        label=f\"{title_method} {np.round(mae,2)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
    "xlim, ylim = plt.xlim(), plt.ylim()\n",
    "plt.plot(\n",
    "    [xlim[0], lower_tail_end, lower_tail_end],\n",
    "    [lower_tail_end, lower_tail_end, ylim[0]],\n",
    "    color=\"black\",\n",
    "    label=None,\n",
    ")\n",
    "plt.text(0.02, lower_tail_end, \"Lower\\n tail\", ha=\"center\", va=\"bottom\")\n",
    "plt.plot(\n",
    "    [upper_tail_start, upper_tail_start, xlim[1]],\n",
    "    [ylim[1], upper_tail_start, upper_tail_start],\n",
    "    color=\"black\",\n",
    "    label=None,\n",
    ")\n",
    "plt.text(\n",
    "    (xlim[1] - upper_tail_start) / 2 + upper_tail_start,\n",
    "    upper_tail_start - 0.03,\n",
    "    \"Upper\\n tail\",\n",
    "    ha=\"center\",\n",
    "    va=\"top\",\n",
    ")\n",
    "plt.xlabel(f\"CDF of fitted distribution\")\n",
    "plt.title(\"Base PP-plot\")\n",
    "\n",
    "\n",
    "# Upper tail\n",
    "start = upper_tail_start\n",
    "end = 1\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax2 = fig.add_subplot(spec[0, 2])\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Upper tail\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        label=f\"{title_method} {np.round(mae,3)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
    "plt.xlabel(f\"CDF of fitted distribution\")\n",
    "plt.title(\"Upper tail PP-plot\")\n",
    "# plt.show()\n",
    "\n",
    "\n",
    "# Full distribution\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs)))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Full distribution\"] = mae\n",
    "\n",
    "\n",
    "# Highlight the method row that achieves the minimum error for each column\n",
    "display(\n",
    "    mae_table.style.set_caption(\"Mean Absolute Error (MAE)\")\n",
    "    .set_table_styles(\n",
    "        [{\"selector\": \"caption\", \"props\": [(\"font-size\", \"16px\")]}]\n",
    "    )\n",
    "    .apply(highlight_min)\n",
    ")\n",
    "print(\"\\n\\n\\n\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
