{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "91b073fa",
   "metadata": {},
   "source": [
    "# Example notebook: Spliced Binned-Pareto distribution fitted using a distributional neural network\n",
    "\n",
     "This notebook is an example of how to\n",
    "(a) use a distributional neural network to fit a Spliced Binned-Pareto distribution to given data, and \n",
     "(b) compare the Spliced Binned-Pareto distribution fit to other distributions' fits. This notebook was built with Python v3.7.9 and GluonTS v0.10.4.\n",
    "\n",
    "1. [Library imports](#imports)\n",
    "1. [Data generation](#data)\n",
     "1. [Train a Distributional Neural Network](#tcn)\n",
    "    1. (new) Spliced Binned-Pareto distribution\n",
    "    1. Gaussian distribution\n",
    "1. [Evaluation: Probability-Probability plots on test data](#pp)\n",
    "\n",
    "\n",
    "\n",
    "<div style=\"text-align: right\"> Total Runtime (1 cpu): 10 minutes </div>\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "## Library Imports <a name=\"imports\"></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "26a3b5c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %matplotlib notebook\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "import pandas as pd\n",
    "from pydantic import PositiveFloat, PositiveInt\n",
    "import numpy as np\n",
    "import os\n",
    "import pprint\n",
    "import random\n",
    "from scipy import stats \n",
    "from scipy.special import softmax\n",
    "import time\n",
    "from tqdm import tqdm, trange\n",
    "from typing import List, Tuple, Optional\n",
    "\n",
    "\n",
    "import torch\n",
    "from torch import optim\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "\n",
    "from gluonts.torch.distributions.distribution_output import (\n",
    "    Distribution,\n",
    "    DistributionOutput,\n",
    "    StudentT, \n",
    "    StudentTOutput,\n",
    "    Normal,\n",
    "    NormalOutput,\n",
    ")\n",
    "\n",
    "from gluonts.torch.distributions.binned_uniforms import (\n",
    "    BinnedUniforms,\n",
    "    BinnedUniformsOutput,\n",
    ")\n",
    "\n",
    "from gluonts.torch.distributions.spliced_binned_pareto import (\n",
    "    SplicedBinnedPareto,\n",
    "    SplicedBinnedParetoOutput,\n",
    ")\n",
    "\n",
    "from gluonts.nursery.spliced_binned_pareto.training_functions import (\n",
    "    plot_prediction,\n",
    "    highlight_min,\n",
    ")\n",
    "from gluonts.nursery.spliced_binned_pareto.data_functions import create_ds, create_ds_asymmetric\n",
    "\n",
    "\n",
    "font = {\"family\": \"serif\", \"weight\": \"normal\", \"size\": 12}\n",
    "matplotlib.rc(\"font\", **font)\n",
    "\n",
    "\n",
    "###########################\n",
    "# Get device information\n",
    "###########################\n",
    "cuda_id = \"0\"\n",
    "if torch.cuda.is_available():\n",
    "    dev = f\"cuda:{cuda_id}\"\n",
    "else:\n",
    "    dev = \"cpu\"\n",
    "device = torch.device(dev)\n",
    "print(\"Device is\", device)\n",
    "\n",
    "\n",
    "\n",
    "# Reproducibility\n",
    "seed = 42\n",
    "os.environ[\"PYTHONHASHSEED\"] = str(seed)\n",
    "# Torch RNG\n",
    "torch.manual_seed(seed)\n",
    "torch.cuda.manual_seed(seed)\n",
    "torch.cuda.manual_seed_all(seed)\n",
    "# Python RNG\n",
    "np.random.seed(seed)\n",
    "random.seed(seed)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12227b31",
   "metadata": {},
   "source": [
    "## Data Generation <a name=\"data\"></a>\n",
    "\n",
    "Here we generate time series with a sinusoidal mean, and asymmetric heavy-tailed noise."
   ]
  },
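  {
   "cell_type": "markdown",
   "id": "a1d2f301",
   "metadata": {},
   "source": [
    "The exact generator lives in `create_ds_asymmetric` (nursery module); the following toy cell is only a hypothetical illustration of the kind of series used here: a sinusoidal mean plus asymmetric, heavy-tailed Student-t noise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1d2f302",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch only -- NOT the actual create_ds_asymmetric implementation\n",
    "rng = np.random.default_rng(0)\n",
    "t_toy = np.arange(1_000)\n",
    "mean_toy = 10 * np.sin(2 * np.pi * t_toy / 100)              # sinusoidal mean\n",
    "noise_toy = rng.standard_t(df=3, size=1_000)                 # heavy-tailed noise\n",
    "noise_toy = np.where(noise_toy > 0, 2.0 * noise_toy, noise_toy)  # heavier upper tail\n",
    "toy_series = mean_toy + noise_toy"
   ]
  },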
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "98d7849d",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "t_dof = [10, 10]\n",
    "noise_mult = [0.25, 0.25]\n",
    "xi = [1 / 50.0, 1 / 25.0]\n",
    "\n",
    "train_ts_tensor = create_ds_asymmetric(5_000, t_dof, noise_mult, xi).squeeze().squeeze()\n",
    "val_ts_tensor = create_ds_asymmetric(1_000, t_dof, noise_mult, xi).squeeze().squeeze()\n",
    "test_ts_tensor = create_ds_asymmetric(1_000, t_dof, noise_mult, xi).squeeze().squeeze()\n",
    "\n",
    "train_ts_tensor = train_ts_tensor.to(device)\n",
    "val_ts_tensor = val_ts_tensor.to(device)\n",
    "test_ts_tensor = test_ts_tensor.to(device)\n",
    "\n",
    "plt.figure(figsize=(10, 3))\n",
    "plt.plot(train_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Training dataset\")\n",
    "plt.show()\n",
    "\n",
    "plt.figure(figsize=(10, 3))\n",
    "plt.plot(val_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Validation dataset\")\n",
    "plt.show()\n",
    "\n",
    "plt.figure(figsize=(10, 3))\n",
    "plt.plot(test_ts_tensor.cpu().flatten())\n",
    "plt.title(\"Test dataset\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e224f49",
   "metadata": {},
   "source": [
    "## Train a Distributional Neural Network <a name=\"tcn\"></a>\n",
    "\n",
     "Here we design a distributional neural network (`DistributionOutputNN`) to learn the one-step-ahead (`lead_time`) predictive distribution from the series' previous 100 observations (`context_length`). We use this network to compare the fits of two predictive distributions: Spliced Binned-Pareto and Gaussian.\n",
    "\n",
    "| Distribution      | Avg Time | Approx Total Time (1 cpu) | \n",
    "| ----------- | ----------- | ----------- |\n",
    "| Spliced Binned-Pareto      | 0.76s/epoch       | 4 min       |\n",
    "| Gaussian   | 0.5s/epoch        | 2.5 min       |"
   ]
  },
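  {
   "cell_type": "markdown",
   "id": "f0a7b101",
   "metadata": {},
   "source": [
    "The core idea can be sketched in plain PyTorch (a hypothetical toy, not the GluonTS `DistributionOutput` machinery used below): the network emits raw values, these are mapped to valid distribution parameters, and training minimizes the negative log-likelihood."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a7b102",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch only; the notebook's real mapping uses get_args_proj below\n",
    "proj = torch.nn.Linear(100, 2)                  # context window -> (mu, raw_scale)\n",
    "x_toy = torch.randn(8, 100)                     # a batch of 8 windows\n",
    "y_toy = torch.randn(8)                          # one-step-ahead targets\n",
    "mu_toy, raw_scale_toy = proj(x_toy).unbind(dim=-1)\n",
    "scale_toy = torch.nn.functional.softplus(raw_scale_toy) + 1e-6  # enforce scale > 0\n",
    "toy_distr = torch.distributions.Normal(mu_toy, scale_toy)\n",
    "toy_loss = -toy_distr.log_prob(y_toy).mean()    # negative log-likelihood\n",
    "toy_loss.backward()"
   ]
  },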
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b217957",
   "metadata": {},
   "outputs": [],
   "source": [
    "context_length = 100\n",
    "lead_time = 1\n",
    "\n",
    "# Defining the main hyperparameters\n",
    "bins_upper_bound = train_ts_tensor.max()\n",
    "bins_lower_bound = train_ts_tensor.min()\n",
    "nbins = 100\n",
    "percentile_tail = 0.05"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "79f2e108",
   "metadata": {},
   "outputs": [],
   "source": [
    "class WindowDataset(Dataset): \n",
    "    \"\"\"\n",
    "    Creates inputs of window size context_length, corresponding to outputs of lead_time ahead\n",
    "    * where both are on the same device\n",
    "    \"\"\"\n",
    "    \n",
    "    def __init__(self, y, context_length=100, lead_time=1, device=device):\n",
    "        self.y = y[context_length:].double()\n",
    "        self.x = y[:-lead_time].double()\n",
    "        self.device = device\n",
    "        self.context_length = context_length\n",
    "        self.lead_time = lead_time\n",
    "\n",
     "    def __len__(self):\n",
     "        # Number of complete (window, target) pairs that fit in the series\n",
     "        return len(self.y) - self.lead_time + 1\n",
    "\n",
    "    def __getitem__(self, index):\n",
     "        return self.x[index:(self.context_length + index)].to(self.device), self.y[index].to(self.device)\n",
    "    \n",
    "    \n",
     "# Helper to build a simple feed-forward model for the marginal distribution:\n",
    "def append_layer(\n",
    "    l_layers: List,\n",
    "    input_dimension: int,\n",
    "    output_dimension: int,\n",
    "    dropout_probability: PositiveFloat = PositiveFloat(0.25),\n",
    "):\n",
    "    linear = torch.nn.Linear(input_dimension, output_dimension)\n",
    "    l_layers.append(linear)\n",
    "        \n",
    "    l_layers.append(torch.nn.Dropout(dropout_probability))\n",
    "    l_layers.append(torch.nn.LeakyReLU())\n",
    "    return l_layers\n",
    "\n",
    "\n",
    "class DistributionOutputNN(torch.nn.Module):\n",
    "    \"\"\"\n",
    "    This neural net's forward specifies (or updates) the parameter estimates of the given DistributionOutput\n",
    "    \"\"\"\n",
    "    def __init__(\n",
    "        self,\n",
     "        input_dimension: int = 1,  # Dimension of input data\n",
    "        number_hidden_layers: int = 10,\n",
    "        number_hidden_dimensions: int = 30,\n",
    "        distr_output: DistributionOutput = StudentTOutput(),\n",
    "        init_args: Optional[Tuple] = None,\n",
    "        verbose=False,\n",
    "    ):\n",
    "        super().__init__()\n",
    "        \n",
    "        # The number of parameters to fit\n",
    "        self.distr_output = distr_output\n",
    "        self.output_dimension = len(self.distr_output.args_dim)\n",
    "        \n",
    "        ###  Creating the main network:\n",
    "        net_layers = []\n",
    "        \n",
    "        dropout_probability = 0.5\n",
    "        dropout_probability_per_layer = np.linspace(start=dropout_probability, stop=0.0, num=number_hidden_layers)\n",
    "        if number_hidden_layers > 1:\n",
    "            dropout_probability_per_layer[-2] = 0.0\n",
    "        \n",
    "        # We add the first layer:\n",
    "        net_layers = append_layer(net_layers, input_dimension, number_hidden_dimensions, dropout_probability)\n",
    "        \n",
    "        # We add each of the hidden layers:\n",
    "        for i in range(number_hidden_layers - 1):\n",
    "            net_layers = append_layer(net_layers, number_hidden_dimensions, number_hidden_dimensions, dropout_probability_per_layer[i])\n",
    "            \n",
    "        # We add the final layer:\n",
    "        net_layers.append(torch.nn.Linear(number_hidden_dimensions, self.output_dimension))\n",
    "\n",
    "        # The network\n",
     "        self.network = torch.nn.Sequential(*net_layers)\n",
    "        \n",
    "        # Reserve for specifying the distribution fit\n",
    "        self.args_proj = distr_output.get_args_proj(self.output_dimension)\n",
    "        if init_args is not None:\n",
    "            assert (\n",
     "                len(init_args) == self.output_dimension\n",
    "            ), f\"len(init_args) should equal self.output_dimension but {len(init_args)} != {self.output_dimension}\"\n",
    "            self.distr_args = init_args\n",
    "            self.distr = self.distr_output.distribution(init_args)\n",
    "        else:\n",
    "            self.distr_args = None\n",
    "            self.distr = None\n",
    "    \n",
    "    def forward(self, x):\n",
    "        net_out = self.network(x)\n",
     "        net_out_final = net_out.squeeze()  # shape: (*batch_size, output_dimension)\n",
    "        \n",
    "        self.distr_args = self.args_proj(net_out_final)\n",
    "        self.distr = self.distr_output.distribution(self.distr_args)\n",
    "        \n",
    "        return self.distr   \n",
    "    \n",
    "def print_model_param_values(model):\n",
    "    distr_parameter_names = list(model.distr_output.args_dim.keys())\n",
    "    for i in range(len(distr_parameter_names)):\n",
    "        try:\n",
    "            npvalue = model.distr_output.distribution(model.distr_args).__getattribute__(distr_parameter_names[i]).detach().cpu().numpy()\n",
    "            print(distr_parameter_names[i], npvalue[0])\n",
     "        except Exception:  # distr_args may still be None before the first forward pass\n",
    "            print(distr_parameter_names[i], \"NoneType\")\n",
    "\n",
    "    \n",
     "def train_NN_with_log_prob(\n",
     "    model: DistributionOutputNN,\n",
     "    training_x: torch.Tensor,\n",
     "    learning_rate: float = 0.0005,\n",
     "    weight_decay: PositiveFloat = PositiveFloat(1e-3),\n",
     "    batch_size: int = 100,\n",
     "    epochs: int = 50,\n",
     "    validation_x: Optional[torch.Tensor] = None,\n",
     ") -> Distribution:\n",
    "    \"\"\"\n",
    "    This trains a Distribution Neural Net with respect to the negative log-likelihood of its DistributionOutput\n",
    "    \"\"\"\n",
    "    \n",
    "    # Get the model parameters:\n",
    "    model.to(device)\n",
    "    model = model.double()\n",
    "    params = list(model.parameters())\n",
    "    \n",
    "    ## Check that model is being updated\n",
    "    #print_model_param_values(model)\n",
    "    \n",
    "    train_losses = []\n",
    "    val_losses = []\n",
    "    predictions_list = []\n",
    "\n",
    "    optimizer = optim.Adam(params=params, lr=learning_rate, weight_decay=weight_decay)\n",
    "    epoch_mod = 25\n",
    "    \n",
    "    # Training loop:\n",
     "    t = trange(epochs, desc=f\"[{title_method}]\", leave=True)  # title_method is a global set by the calling cell\n",
    "    for epoch in t:\n",
    "\n",
    "        wd_train = WindowDataset(training_x)\n",
    "        loader_train = iter(DataLoader(wd_train, batch_size=batch_size, shuffle=False))\n",
    "        loader_val = iter(DataLoader(WindowDataset(validation_x), batch_size=batch_size, shuffle=False))\n",
    "        \n",
    "        #######################################\n",
    "        # Train over all the batches:\n",
    "        #######################################\n",
    "        model.train()\n",
    "\n",
    "        batch_losses = []\n",
    "        for i in range(0, (len(training_x)-context_length) // batch_size):\n",
    "\n",
    "            # Get the training examples:\n",
     "            input_features, output_y = next(loader_train)\n",
    "\n",
    "            # Compute Neg-Loglikelihood\n",
    "            fitted_distr = model(input_features)\n",
    "            losses = -1 * fitted_distr.log_prob(output_y)\n",
    "\n",
    "            # Back-propagate the loss\n",
    "            loss = torch.mean(losses)\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            optimizer.zero_grad()\n",
    "\n",
    "            batch_losses.append(loss.item())\n",
    "            \n",
    "            ## Check that model is being updated\n",
    "            #print_model_param_values(model)\n",
    "\n",
    "        epoch_train_loss = np.mean(batch_losses)\n",
    "        train_losses.append(epoch_train_loss)\n",
    "        \n",
    "        predictions = {\n",
    "            \"low_lower\": [],\n",
    "            \"lower\": [],\n",
    "            \"median\": [],\n",
    "            \"upper\": [],\n",
    "            \"up_upper\": [],\n",
    "        }\n",
    "        \n",
    "        if (epoch % epoch_mod == 0) and (validation_x is not None): \n",
    "            model.eval()\n",
    "            batch_val_losses = []\n",
    "            for i in range(0, (len(validation_x)-context_length) // batch_size):\n",
    "\n",
    "                # Get the training examples:\n",
     "                input_features, output_y = next(loader_val)\n",
    "\n",
    "                # Compute Neg-Loglikelihood\n",
    "                fitted_distr = model(input_features)\n",
    "                losses = -1 * fitted_distr.log_prob(output_y)\n",
    "\n",
    "                # Validation loss\n",
    "                loss = torch.mean(losses)\n",
    "                batch_val_losses.append(loss.item())\n",
    "                \n",
     "                # Validation predictions at a few quantile levels\n",
     "                for key, level in [(\"low_lower\", 0.01), (\"lower\", 0.05), (\"median\", 0.5), (\"upper\", 0.95), (\"up_upper\", 0.99)]:\n",
     "                    predictions[key].append(fitted_distr.icdf(torch.tensor(level, device=device)).detach().cpu().numpy())\n",
    "            \n",
    "            \n",
    "            epoch_val_loss = np.mean(batch_val_losses)\n",
    "            val_losses.append(epoch_val_loss)\n",
     "            for k in predictions:\n",
     "                predictions[k] = np.array(predictions[k]).flatten()\n",
    "            predictions_list.append(predictions)\n",
    "            \n",
    "#             ####################################################################################\n",
    "#             # Plot of intermediary distribution fit on validation data\n",
    "#             ####################################################################################\n",
    "#             #f_ax2 = fig.add_subplot(spec[0, 1:])\n",
    "#             fig = plot_prediction(\n",
    "#                 validation_x,\n",
    "#                 predictions_list[-1],\n",
    "#                 context_length,\n",
    "#                 lead_time,\n",
    "#                 start=0,\n",
    "#                 end=len(validation_x),\n",
    "#                 #fig=None,\n",
    "#             )\n",
    "#             title = plt.gca().get_title()  #MAE\n",
    "#             plt.ylim(-5.0, 30.0)\n",
    "#             plt.xlabel(f\"Time increment $t$\")\n",
    "#             plt.ylabel(r\"Series values\")\n",
    "#             plt.ylim(-5.0, 30.0)\n",
    "#             title = title_method + \" validation MAE: \" + str(np.round(np.float64(title),3)) + f\", epoch: {epoch}\"\n",
    "#             plt.title(title)\n",
    "#             plt.show()\n",
    "        \n",
    "\n",
    "    ####################################################################################\n",
    "    # Plot losses\n",
    "    ####################################################################################\n",
    "    fig = plt.figure(figsize=[16, 4])\n",
    "    spec = gridspec.GridSpec(ncols=4, nrows=1, figure=fig)\n",
    "\n",
    "    f_ax1 = fig.add_subplot(spec[0, 0])\n",
    "    plt.plot(train_losses, label=\"Training\")\n",
    "    if validation_x is not None:\n",
    "        plt.plot(\n",
    "            epoch_mod + epoch_mod * np.arange(len(val_losses)),\n",
     "            val_losses,\n",
    "            label=\"Validation\",\n",
    "        )\n",
    "    plt.legend()\n",
    "    plt.xlabel(\"Epoch\")\n",
    "    plt.ylabel(\"Loss\")\n",
    "    plt.title(f\"{title_method} DistributionalNN\")\n",
    "    \n",
    "    ####################################################################################\n",
    "    # Plot of distribution fit on validation data\n",
    "    ####################################################################################\n",
    "    f_ax2 = fig.add_subplot(spec[0, 1:])\n",
    "    fig = plot_prediction(\n",
    "        validation_x,\n",
    "        predictions_list[-1],\n",
    "        context_length,\n",
    "        lead_time,\n",
    "        start=0,\n",
    "        end=len(validation_x),\n",
    "        fig=f_ax2,\n",
    "    )\n",
     "    title = plt.gca().get_title()  # MAE reported by plot_prediction\n",
     "    plt.ylim(-5.0, 30.0)\n",
     "    plt.xlabel(\"Time increment $t$\")\n",
     "    plt.ylabel(\"Series values\")\n",
     "    title = title_method + \" validation MAE: \" + title\n",
    "    plt.title(title)\n",
    "    plt.show()\n",
    "        \n",
    "    return fitted_distr\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fac5cf06",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "title_methods = [\"Spliced Binned-Pareto\", \"Gaussian\"]\n",
    "\n",
     "dict_storage = {\n",
     "    title.lower().replace(\" \", \"\").replace(\"-\", \"\"): {}\n",
     "    for title in title_methods\n",
     "}\n",
    "\n",
    "\n",
    "for title_method in title_methods:\n",
    "    method_str = title_method.lower().replace(\" \", \"\").replace(\"-\", \"\")\n",
    "\n",
    "    ####################################################################################\n",
    "    # Specifying the predictive output distribution\n",
    "    ####################################################################################\n",
    "    if method_str == \"splicedbinnedpareto\":\n",
    "\n",
    "        spliced_binned_pareto_distr = SplicedBinnedParetoOutput(\n",
    "            bins_lower_bound=bins_lower_bound, \n",
    "            bins_upper_bound=bins_upper_bound, \n",
    "            num_bins=nbins, \n",
    "            tail_percentile_gen_pareto=percentile_tail\n",
    "        )\n",
    "        output_distribution = spliced_binned_pareto_distr\n",
    "        learning_rate = 0.0003\n",
    "\n",
     "    elif method_str == \"gaussian\":\n",
    "\n",
    "        gaussian_distr = NormalOutput()\n",
    "        output_distribution = gaussian_distr\n",
    "        learning_rate = 0.0002\n",
    "        \n",
    "\n",
    "\n",
    "\n",
    "    ####################################################################################\n",
    "    # Creating the Distributional Neural Net\n",
    "    ####################################################################################\n",
    "    model = DistributionOutputNN(\n",
    "        input_dimension = context_length, # Dimension of input data\n",
    "        number_hidden_layers = 10,\n",
    "        number_hidden_dimensions = 30, \n",
    "        distr_output = output_distribution,\n",
    "    )\n",
    "\n",
    "    ####################################################################################\n",
     "    # Training the distributional NN for the predictive distribution:\n",
    "    ####################################################################################\n",
    "    epochs = 300\n",
    "    batch_size = 32\n",
    "\n",
     "    dict_storage[method_str] = dict(\n",
     "        method_str=method_str,\n",
     "        title_method=title_method,\n",
     "        learning_rate=learning_rate,\n",
     "        epochs=epochs,\n",
     "        context_length=context_length,\n",
     "        lead_time=lead_time,\n",
     "    )\n",
    "\n",
    "    ####################################################################################\n",
    "    # Train model\n",
    "    # Running a Distributional NN on data\n",
    "    # Maximum Likelihood Estimation on the DistributionOutput\n",
    "    ####################################################################################\n",
    "    start = time.time()\n",
    "\n",
    "    distr_hat = train_NN_with_log_prob(\n",
    "                    model, \n",
    "                    train_ts_tensor,\n",
    "                    learning_rate=learning_rate,\n",
    "                    batch_size=batch_size,\n",
    "                    epochs=epochs, \n",
    "                    validation_x = val_ts_tensor,\n",
    "                )\n",
    "\n",
    "\n",
    "    end = time.time()\n",
    "    print(\"runtime:\", end - start)\n",
    "    dict_storage[method_str][\"runtime\"] = end - start\n",
    "    dict_storage[method_str][\"model\"] = model\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cfe96a5e",
   "metadata": {},
   "source": [
    "## Evaluation: Probability-Probability plots on test data <a name=\"pp\"></a>\n",
    "\n",
     "We evaluate the accuracy of each method's density estimation using Probability-Probability (PP) plots. For a given quantile level $q$, we compute $y_q$, the fraction of points that fall below the corresponding quantile $z_q{(t)}$ of their predictive distribution:\n",
    "\\begin{align}\n",
    "  y_q = \\frac{\\sum_{t=2}^{T} \\mathbb{I}[ {x}_t < z_{1-q}{(t)} ] }{T}, \\hspace{40pt}   z_q{(t)} : p\\left( {x}_{t} > z_q{(t)} \\middle| {x}_{1:t-1} \\right)< q\n",
    "\\end{align}\n",
     "To obtain a quantitative score, we measure how good each estimate is by computing the Mean Absolute Error (MAE) between $y_q$ and $q$ over all measured quantiles $q$ (and separately over the tail quantiles).\n",
    "\n"
   ]
  },
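  {
   "cell_type": "markdown",
   "id": "c4d9e201",
   "metadata": {},
   "source": [
    "As a hypothetical toy example of this computation: for standard-normal data scored against its true distribution, $y_q$ should track $q$ closely, giving a near-zero MAE."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4d9e202",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical toy calibration check (not the notebook's model)\n",
    "rng = np.random.default_rng(0)\n",
    "x_toy = rng.standard_normal(10_000)\n",
    "q_toy = np.linspace(0.01, 0.99, 50)\n",
    "z_toy = stats.norm.ppf(q_toy)                           # model quantiles z_q\n",
    "y_toy = (x_toy[:, None] < z_toy[None, :]).mean(axis=0)  # empirical fraction below z_q\n",
    "mae_toy = np.abs(y_toy - q_toy).mean()                  # calibration MAE\n",
    "print(f\"toy calibration MAE: {mae_toy:.4f}\")"
   ]
  },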
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e267b0de",
   "metadata": {},
   "outputs": [],
   "source": [
    "def quantile_to_str(q):\n",
    "    \"\"\"\n",
    "    Quick function to cast quantile decimal to q-prefixed string\n",
    "    \"\"\"\n",
    "    return \"q-\" + str(np.round(q, 3))\n",
    "\n",
    "\n",
    "lower_tail_end = percentile_tail\n",
    "upper_tail_start = 1 - percentile_tail\n",
    "\n",
    "likelihoods_of_interest = np.linspace(0.001, lower_tail_end, 25)\n",
    "quantile_levels = torch.tensor(\n",
    "    np.unique(\n",
    "        np.round(\n",
    "            np.concatenate(\n",
    "                (\n",
    "                    likelihoods_of_interest,\n",
    "                    np.linspace(lower_tail_end, upper_tail_start, 81),\n",
    "                    1 - likelihoods_of_interest,\n",
    "                )\n",
    "            ),\n",
    "            3,\n",
    "        )\n",
    "    )\n",
    ")\n",
    "quantile_strs = list(map(quantile_to_str, quantile_levels.numpy()))\n",
    "quantile_levels = quantile_levels.to(torch.device(dev))\n",
    "\n",
    "ts_out_tensor = test_ts_tensor.float()\n",
    "ts_len = ts_out_tensor.shape[-1]\n",
    "data_out = dict(\n",
    "    time=np.arange(ts_out_tensor.shape[-1] + lead_time),\n",
    "    ts=np.concatenate(\n",
    "        (ts_out_tensor.cpu().squeeze(), np.array([np.nan] * lead_time))\n",
    "    ),\n",
    ")\n",
    "\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    print('method_str', method_str)\n",
    "\n",
    "    # Get the stored DistributionNN fitted for the given method\n",
    "    model = dict_storage[method_str][\"model\"]\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "\n",
    "    data_out[method_str] = dict()\n",
    "    for q_str in quantile_strs:\n",
    "        data_out[method_str][q_str] = [np.nan] * (context_length + 2)\n",
    "\n",
    "        \n",
    "    # Loop through the time series\n",
    "    loader_test = iter(DataLoader(WindowDataset(ts_out_tensor), batch_size=(ts_len - context_length), shuffle=False))\n",
    "    start = time.time()\n",
     "    for i in trange(1):  # a single batch covers all test windows\n",
    "\n",
     "        input_features, output = next(loader_test)\n",
    "        distr_output = model(input_features)\n",
    "\n",
    "        # z_q(t) quantile of predictive distribution\n",
    "        for qs, ql in zip(quantile_strs, quantile_levels):\n",
    "            qv = distr_output.icdf(ql)\n",
    "            data_out[method_str][qs] = np.concatenate([data_out[method_str][qs], qv.detach().cpu().numpy()])\n",
    "\n",
    "\n",
    "\n",
    "    # y_q fraction of points below z_q(t)\n",
    "    calibration_pairs = []\n",
    "    for qs, ql in zip(quantile_strs, quantile_levels.cpu().numpy()):\n",
    "        proportion_observations = np.array(\n",
    "            list(\n",
    "                map(\n",
    "                    lambda x: x[0] < x[1],\n",
    "                    zip(data_out[\"ts\"], data_out[method_str][qs]),\n",
    "                )\n",
    "            )\n",
    "        ).sum() / np.sum(np.isfinite(np.array(data_out[method_str][qs])))\n",
    "        calibration_pairs.append([ql, proportion_observations])\n",
    "    calibration_pairs = np.array(calibration_pairs)\n",
    "    data_out[method_str][\"calibration\"] = calibration_pairs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b50d8f70",
   "metadata": {},
   "outputs": [],
   "source": [
    "fig = plt.figure(figsize=[15, 5], constrained_layout=True)\n",
    "spec = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)\n",
    "mae_table = pd.DataFrame(\n",
    "    [],\n",
    "    columns=[\"Lower tail\", \"Base\", \"Upper tail\", \"Full distribution\"],\n",
    "    index=title_methods,\n",
    ")\n",
    "\n",
    "# Lower tail\n",
    "start = 0.0\n",
    "end = lower_tail_end\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax1 = fig.add_subplot(spec[0, 0])\n",
    "alpha = 0.5\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Lower tail\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        label=f\"{title_method} {np.round(mae,3)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
     "plt.xlabel(\"CDF of fitted distribution\")\n",
     "plt.ylabel(\"Empirical CDF\")  # proportion of data below the quantile\n",
    "plt.title(\"Lower tail PP-plot\")\n",
    "\n",
    "# Base distribution\n",
    "start = lower_tail_end\n",
    "end = upper_tail_start\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax1 = fig.add_subplot(spec[0, 1])\n",
    "alpha = 0.5\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Base\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        alpha=alpha,\n",
    "        label=f\"{title_method} {np.round(mae,2)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
    "xlim, ylim = plt.xlim(), plt.ylim()\n",
    "plt.plot(\n",
    "    [xlim[0], lower_tail_end, lower_tail_end],\n",
    "    [lower_tail_end, lower_tail_end, ylim[0]],\n",
    "    color=\"black\",\n",
    "    label=None,\n",
    ")\n",
    "plt.text(0.02, lower_tail_end, \"Lower\\n tail\", ha=\"center\", va=\"bottom\")\n",
    "plt.plot(\n",
    "    [upper_tail_start, upper_tail_start, xlim[1]],\n",
    "    [ylim[1], upper_tail_start, upper_tail_start],\n",
    "    color=\"black\",\n",
    "    label=None,\n",
    ")\n",
    "plt.text(\n",
    "    (xlim[1] - upper_tail_start) / 2 + upper_tail_start,\n",
    "    upper_tail_start - 0.03,\n",
    "    \"Upper\\n tail\",\n",
    "    ha=\"center\",\n",
    "    va=\"top\",\n",
    ")\n",
     "plt.xlabel(\"CDF of fitted distribution\")\n",
    "plt.title(\"Base PP-plot\")\n",
    "\n",
    "\n",
    "# Upper tail\n",
    "start = upper_tail_start\n",
    "end = 1\n",
    "indices = quantile_levels.cpu().numpy() > start\n",
    "indices *= quantile_levels.cpu().numpy() < end\n",
    "\n",
    "f_ax2 = fig.add_subplot(spec[0, 2])\n",
    "plt.plot(\n",
    "    np.linspace(start, end),\n",
    "    np.linspace(start, end),\n",
    "    color=\"gray\",\n",
    "    alpha=alpha,\n",
    "    label=None,\n",
    ")\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs[indices, :])))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Upper tail\"] = mae\n",
    "    plt.scatter(\n",
    "        calibration_pairs[indices, 0],\n",
    "        calibration_pairs[indices, 1],\n",
    "        label=f\"{title_method} {np.round(mae,3)}\",\n",
    "    )\n",
    "plt.legend(title=\"MAE\")\n",
     "plt.xlabel(\"CDF of fitted distribution\")\n",
    "plt.title(\"Upper tail PP-plot\")\n",
    "# plt.show()\n",
    "\n",
    "\n",
    "# Full distribution\n",
    "for method_str in list(dict_storage.keys()):\n",
    "    calibration_pairs = data_out[method_str][\"calibration\"]\n",
    "    mae = np.mean(np.abs(np.diff(calibration_pairs)))\n",
    "    title_method = dict_storage[method_str][\"title_method\"]\n",
    "    mae_table.loc[title_method, \"Full distribution\"] = mae\n",
    "\n",
    "\n",
    "# Highlight the method row that achieves the minimum error for each column\n",
    "display(\n",
    "    mae_table.style.set_caption(\"Mean Absolute Error (MAE)\")\n",
    "    .set_table_styles(\n",
    "        [{\"selector\": \"caption\", \"props\": [(\"font-size\", \"16px\")]}]\n",
    "    )\n",
    "    .apply(highlight_min)\n",
    ")\n",
    "print(\"\\n\\n\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "003a5c15",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
