{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%set_env PYTORCH_ENABLE_MPS_FALLBACK=1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | default_exp models.tft"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TFT"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In summary Temporal Fusion Transformer (TFT) combines gating layers, an LSTM recurrent encoder, with multi-head attention layers for a multi-step forecasting strategy decoder.<br>TFT's inputs are static exogenous $\\mathbf{x}^{(s)}$, historic exogenous $\\mathbf{x}^{(h)}_{[:t]}$, exogenous available at the time of the prediction $\\mathbf{x}^{(f)}_{[:t+H]}$ and autorregresive features $\\mathbf{y}_{[:t]}$, each of these inputs is further decomposed into categorical and continuous. The network uses a multi-quantile regression to model the following conditional probability:$$\\mathbb{P}(\\mathbf{y}_{[t+1:t+H]}|\\;\\mathbf{y}_{[:t]},\\; \\mathbf{x}^{(h)}_{[:t]},\\; \\mathbf{x}^{(f)}_{[:t+H]},\\; \\mathbf{x}^{(s)})$$\n",
    "\n",
    "**References**<br>\n",
    "- [Jan Golda, Krzysztof Kudrynski. \"NVIDIA, Deep Learning Forecasting Examples\"](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Forecasting/TFT)<br>\n",
    "- [Bryan Lim, Sercan O. Arik, Nicolas Loeff, Tomas Pfister, \"Temporal Fusion Transformers for interpretable multi-horizon time series forecasting\"](https://www.sciencedirect.com/science/article/pii/S0169207021000637)<br>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Figure 1. Temporal Fusion Transformer Architecture.](imgs_models/tft_architecture.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | export\n",
    "from typing import Callable, Optional, Tuple\n",
    "\n",
    "import pandas as pd\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "from torch import Tensor\n",
    "from torch.nn import LayerNorm\n",
    "from neuralforecast.losses.pytorch import MAE\n",
    "from neuralforecast.common._base_model import BaseModel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "import logging\n",
    "import warnings\n",
    "from fastcore.test import test_eq\n",
    "from nbdev.showdoc import show_doc\n",
    "from neuralforecast.common._model_checks import check_model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | hide\n",
    "logging.getLogger(\"pytorch_lightning\").setLevel(logging.ERROR)\n",
    "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Auxiliary Functions"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.1 Gating Mechanisms\n",
    "\n",
    "The Gated Residual Network (GRN) provides adaptive depth and network complexity capable of accommodating different size datasets. As residual connections allow for the network to skip the non-linear transformation of input $\\mathbf{a}$ and context $\\mathbf{c}$.\n",
    "\n",
    "\\begin{align}\n",
    "\\eta_{1} &= \\mathrm{ELU}(\\mathbf{W}_{1}\\mathbf{a}+\\mathbf{W}_{2}\\mathbf{c}+\\mathbf{b}_{1}) \\\\\n",
    "\\eta_{2} &= \\mathbf{W}_{2}\\eta_{1}+b_{2} \\\\\n",
    "\\mathrm{GRN}(\\mathbf{a}, \\mathbf{c}) &= \\mathrm{LayerNorm}(a + \\textrm{GLU}(\\eta_{2}))\n",
    "\\end{align}\n",
    "\n",
    "The Gated Linear Unit (GLU) provides the flexibility of supressing unnecesary parts of the GRN. Consider GRN's output $\\gamma$ then GLU transformation is defined by:\n",
    "\n",
    "$$\\mathrm{GLU}(\\gamma) = \\sigma(\\mathbf{W}_{4}\\gamma +b_{4}) \\odot (\\mathbf{W}_{5}\\gamma +b_{5})$$"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Figure 2. Gated Residual Network.](imgs_models/tft_grn.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "def get_activation_fn(activation_str: str) -> Callable:\n",
    "    activation_map = {\n",
    "        \"ReLU\": F.relu,\n",
    "        \"Softplus\": F.softplus,\n",
    "        \"Tanh\": F.tanh,\n",
    "        \"SELU\": F.selu,\n",
    "        \"LeakyReLU\": F.leaky_relu,\n",
    "        \"Sigmoid\": F.sigmoid,\n",
    "        \"ELU\": F.elu,\n",
    "        \"GLU\": F.glu,\n",
    "    }\n",
    "    return activation_map.get(activation_str, F.elu)\n",
    "\n",
    "\n",
    "class MaybeLayerNorm(nn.Module):\n",
    "    def __init__(self, output_size, hidden_size, eps):\n",
    "        super().__init__()\n",
    "        if output_size and output_size == 1:\n",
    "            self.ln = nn.Identity()\n",
    "        else:\n",
    "            self.ln = LayerNorm(output_size if output_size else hidden_size, eps=eps)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.ln(x)\n",
    "\n",
    "\n",
    "class GLU(nn.Module):\n",
    "    def __init__(self, hidden_size, output_size):\n",
    "        super().__init__()\n",
    "        self.lin = nn.Linear(hidden_size, output_size * 2)\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tensor:\n",
    "        x = self.lin(x)\n",
    "        x = F.glu(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "class GRN(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        input_size,\n",
    "        hidden_size,\n",
    "        output_size=None,\n",
    "        context_hidden_size=None,\n",
    "        dropout=0,\n",
    "        activation=\"ELU\",\n",
    "    ):\n",
    "        super().__init__()\n",
    "        self.layer_norm = MaybeLayerNorm(output_size, hidden_size, eps=1e-3)\n",
    "        self.lin_a = nn.Linear(input_size, hidden_size)\n",
    "        if context_hidden_size is not None:\n",
    "            self.lin_c = nn.Linear(context_hidden_size, hidden_size, bias=False)\n",
    "        self.lin_i = nn.Linear(hidden_size, hidden_size)\n",
    "        self.glu = GLU(hidden_size, output_size if output_size else hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        self.out_proj = nn.Linear(input_size, output_size) if output_size else None\n",
    "        self.activation_fn = get_activation_fn(activation)\n",
    "\n",
    "    def forward(self, a: Tensor, c: Optional[Tensor] = None):\n",
    "        x = self.lin_a(a)\n",
    "        if c is not None:\n",
    "            x = x + self.lin_c(c).unsqueeze(1)\n",
    "        x = self.activation_fn(x)\n",
    "        x = self.lin_i(x)\n",
    "        x = self.dropout(x)\n",
    "        x = self.glu(x)\n",
    "        y = a if not self.out_proj else self.out_proj(a)\n",
    "        x = x + y\n",
    "        x = self.layer_norm(x)\n",
    "        return x"
   ]
  },
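   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick sanity check of the GLU definition above (hidden from the exported module, assuming only the `torch` imports already in scope): `F.glu` splits its input's last dimension in half and gates the first half with the sigmoid of the second, exactly the product in the equation once the linear layer has produced both halves."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# | hide\n",
     "# F.glu(x) == first_half(x) * sigmoid(second_half(x))\n",
     "_x = torch.randn(4, 8)\n",
     "_manual = _x[:, :4] * torch.sigmoid(_x[:, 4:])\n",
     "test_eq(torch.allclose(F.glu(_x, dim=-1), _manual), True)"
    ]
   },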
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.2 Variable Selection Networks\n",
    "\n",
    "TFT includes automated variable selection capabilities, through its variable selection network (VSN) components. The VSN takes the original input $\\{\\mathbf{x}^{(s)}, \\mathbf{x}^{(h)}_{[:t]}, \\mathbf{x}^{(f)}_{[:t]}\\}$ and transforms it through embeddings or linear transformations into a high dimensional space\n",
    "$\\{\\mathbf{E}^{(s)}, \\mathbf{E}^{(h)}_{[:t]}, \\mathbf{E}^{(f)}_{[:t+H]}\\}$. \n",
    "\n",
    "For the observed historic data, the embedding matrix $\\mathbf{E}^{(h)}_{t}$ at time $t$ is a concatenation of $j$ variable $e^{(h)}_{t,j}$ embeddings:\n",
    "\\begin{align}\n",
    "\\mathbf{E}^{(h)}_{t} &= [e^{(h)}_{t,1},\\dots,e^{(h)}_{t,j},\\dots,e^{(h)}_{t,n_{h}}] \\\\\n",
    "\\mathbf{\\tilde{e}}^{(h)}_{t,j} &= \\mathrm{GRN}(e^{(h)}_{t,j})\n",
    "\\end{align}\n",
    "\n",
    "The variable selection weights are given by:\n",
    "$$s^{(h)}_{t}=\\mathrm{SoftMax}(\\mathrm{GRN}(\\mathbf{E}^{(h)}_{t},\\mathbf{E}^{(s)}))$$\n",
    "\n",
    "The VSN processed features are then:\n",
    "$$\\tilde{\\mathbf{E}}^{(h)}_{t}= \\sum_{j} s^{(h)}_{j} \\tilde{e}^{(h)}_{t,j}$$"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Figure 3. Variable Selection Network.](imgs_models/tft_vsn.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "class TFTEmbedding(nn.Module):\n",
    "    def __init__(\n",
    "        self, hidden_size, stat_input_size, futr_input_size, hist_input_size, tgt_size\n",
    "    ):\n",
    "        super().__init__()\n",
    "        # There are 4 types of input:\n",
    "        # 1. Static continuous\n",
    "        # 2. Temporal known a priori continuous\n",
    "        # 3. Temporal observed continuous\n",
    "        # 4. Temporal observed targets (time series obseved so far)\n",
    "\n",
    "        self.hidden_size = hidden_size\n",
    "\n",
    "        self.stat_input_size = stat_input_size\n",
    "        self.futr_input_size = futr_input_size\n",
    "        self.hist_input_size = hist_input_size\n",
    "        self.tgt_size = tgt_size\n",
    "\n",
    "        # Instantiate Continuous Embeddings if size is not None\n",
    "        for attr, size in [\n",
    "            (\"stat_exog_embedding\", stat_input_size),\n",
    "            (\"futr_exog_embedding\", futr_input_size),\n",
    "            (\"hist_exog_embedding\", hist_input_size),\n",
    "            (\"tgt_embedding\", tgt_size),\n",
    "        ]:\n",
    "            if size:\n",
    "                vectors = nn.Parameter(torch.Tensor(size, hidden_size))\n",
    "                bias = nn.Parameter(torch.zeros(size, hidden_size))\n",
    "                torch.nn.init.xavier_normal_(vectors)\n",
    "                setattr(self, attr + \"_vectors\", vectors)\n",
    "                setattr(self, attr + \"_bias\", bias)\n",
    "            else:\n",
    "                setattr(self, attr + \"_vectors\", None)\n",
    "                setattr(self, attr + \"_bias\", None)\n",
    "\n",
    "    def _apply_embedding(\n",
    "        self,\n",
    "        cont: Optional[Tensor],\n",
    "        cont_emb: Tensor,\n",
    "        cont_bias: Tensor,\n",
    "    ):\n",
    "        if cont is not None:\n",
    "            # the line below is equivalent to following einsums\n",
    "            # e_cont = torch.einsum('btf,fh->bthf', cont, cont_emb)\n",
    "            # e_cont = torch.einsum('bf,fh->bhf', cont, cont_emb)\n",
    "            e_cont = torch.mul(cont.unsqueeze(-1), cont_emb)\n",
    "            e_cont = e_cont + cont_bias\n",
    "            return e_cont\n",
    "\n",
    "        return None\n",
    "\n",
    "    def forward(self, target_inp, stat_exog=None, futr_exog=None, hist_exog=None):\n",
    "        # temporal/static categorical/continuous known/observed input\n",
    "        # tries to get input, if fails returns None\n",
    "\n",
    "        # Static inputs are expected to be equal for all timesteps\n",
    "        # For memory efficiency there is no assert statement\n",
    "        stat_exog = stat_exog[:, :] if stat_exog is not None else None\n",
    "\n",
    "        s_inp = self._apply_embedding(\n",
    "            cont=stat_exog,\n",
    "            cont_emb=self.stat_exog_embedding_vectors,\n",
    "            cont_bias=self.stat_exog_embedding_bias,\n",
    "        )\n",
    "        k_inp = self._apply_embedding(\n",
    "            cont=futr_exog,\n",
    "            cont_emb=self.futr_exog_embedding_vectors,\n",
    "            cont_bias=self.futr_exog_embedding_bias,\n",
    "        )\n",
    "        o_inp = self._apply_embedding(\n",
    "            cont=hist_exog,\n",
    "            cont_emb=self.hist_exog_embedding_vectors,\n",
    "            cont_bias=self.hist_exog_embedding_bias,\n",
    "        )\n",
    "\n",
    "        # Temporal observed targets\n",
    "        # t_observed_tgt = torch.einsum('btf,fh->btfh',\n",
    "        #                               target_inp, self.tgt_embedding_vectors)\n",
    "        target_inp = torch.matmul(\n",
    "            target_inp.unsqueeze(3).unsqueeze(4),\n",
    "            self.tgt_embedding_vectors.unsqueeze(1),\n",
    "        ).squeeze(3)\n",
    "        target_inp = target_inp + self.tgt_embedding_bias\n",
    "\n",
    "        return s_inp, k_inp, o_inp, target_inp\n",
    "\n",
    "\n",
    "class VariableSelectionNetwork(nn.Module):\n",
    "    def __init__(self, hidden_size, num_inputs, dropout, grn_activation):\n",
    "        super().__init__()\n",
    "        self.joint_grn = GRN(\n",
    "            input_size=hidden_size * num_inputs,\n",
    "            hidden_size=hidden_size,\n",
    "            output_size=num_inputs,\n",
    "            context_hidden_size=hidden_size,\n",
    "            activation=grn_activation,\n",
    "        )\n",
    "        self.var_grns = nn.ModuleList(\n",
    "            [\n",
    "                GRN(\n",
    "                    input_size=hidden_size,\n",
    "                    hidden_size=hidden_size,\n",
    "                    dropout=dropout,\n",
    "                    activation=grn_activation,\n",
    "                )\n",
    "                for _ in range(num_inputs)\n",
    "            ]\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor, context: Optional[Tensor] = None):\n",
    "        Xi = x.reshape(*x.shape[:-2], -1)\n",
    "        grn_outputs = self.joint_grn(Xi, c=context)\n",
    "        sparse_weights = F.softmax(grn_outputs, dim=-1)\n",
    "        transformed_embed_list = [m(x[..., i, :]) for i, m in enumerate(self.var_grns)]\n",
    "        transformed_embed = torch.stack(transformed_embed_list, dim=-1)\n",
    "        # the line below performs batched matrix vector multiplication\n",
    "        # for temporal features it's bthf,btf->bth\n",
    "        # for static features it's bhf,bf->bh\n",
    "        variable_ctx = torch.matmul(\n",
    "            transformed_embed, sparse_weights.unsqueeze(-1)\n",
    "        ).squeeze(-1)\n",
    "\n",
    "        return variable_ctx, sparse_weights"
   ]
  },
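   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick sanity check (hidden from the exported module): the batched matrix-vector product at the end of `VariableSelectionNetwork` is the softmax-weighted sum of the per-variable GRN outputs, matching the einsum noted in the code comments."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# | hide\n",
     "# matmul with unsqueezed weights == einsum 'bthf,btf->bth'\n",
     "_embeds = torch.randn(2, 5, 8, 3)  # [batch, time, hidden, n_vars]\n",
     "_weights = F.softmax(torch.randn(2, 5, 3), dim=-1)\n",
     "_matmul_ctx = torch.matmul(_embeds, _weights.unsqueeze(-1)).squeeze(-1)\n",
     "_einsum_ctx = torch.einsum('bthf,btf->bth', _embeds, _weights)\n",
     "test_eq(torch.allclose(_matmul_ctx, _einsum_ctx, atol=1e-6), True)"
    ]
   },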
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.3. Multi-Head Attention\n",
    "\n",
    "To avoid information bottlenecks from the classic Seq2Seq architecture, TFT \n",
    "incorporates a decoder-encoder attention mechanism inherited transformer architectures ([Li et. al 2019](https://arxiv.org/abs/1907.00235), [Vaswani et. al 2017](https://arxiv.org/abs/1706.03762)). It transform the the outputs of the LSTM encoded temporal features, and helps the decoder better capture long-term relationships.\n",
    "\n",
    "The original multihead attention for each component $H_{m}$ and its query, key, and value representations are denoted by $Q_{m}, K_{m}, V_{m}$, its transformation is given by:\n",
    "\n",
    "\\begin{align}\n",
    "Q_{m} = Q W_{Q,m} \\quad K_{m} = K W_{K,h} \\quad V_{m} = V W_{V,m} \\\\\n",
    "H_{m}=\\mathrm{Attention}(Q_{m}, K_{m}, V_{m}) = \\mathrm{SoftMax}(Q_{m} K^{\\intercal}_{m}/\\mathrm{scale}) \\; V_{m} \\\\\n",
    "\\mathrm{MultiHead}(Q, K, V) = [H_{1},\\dots,H_{M}] W_{M}\n",
    "\\end{align}\n",
    "\n",
    "TFT modifies the original multihead attention to improve its interpretability. To do it it uses shared values $\\tilde{V}$ across heads and employs additive aggregation, $\\mathrm{InterpretableMultiHead}(Q,K,V) = \\tilde{H} W_{M}$. The mechanism has a great resemblence to a single attention layer, but it allows for $M$ multiple attention weights, and can be therefore be interpreted as the average ensemble of $M$ single attention layers.\n",
    "\n",
    "\\begin{align}\n",
    "\\tilde{H} &= \\left(\\frac{1}{M} \\sum_{m} \\mathrm{SoftMax}(Q_{m} K^{\\intercal}_{m}/\\mathrm{scale}) \\right) \\tilde{V} \n",
    "          = \\frac{1}{M} \\sum_{m} \\mathrm{Attention}(Q_{m}, K_{m}, \\tilde{V}) \\\\\n",
    "\\end{align}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "class InterpretableMultiHeadAttention(nn.Module):\n",
    "    def __init__(self, n_head, hidden_size, example_length, attn_dropout, dropout):\n",
    "        super().__init__()\n",
    "        self.n_head = n_head\n",
    "        assert hidden_size % n_head == 0\n",
    "        self.d_head = hidden_size // n_head\n",
    "        self.qkv_linears = nn.Linear(\n",
    "            hidden_size, (2 * self.n_head + 1) * self.d_head, bias=False\n",
    "        )\n",
    "        self.out_proj = nn.Linear(self.d_head, hidden_size, bias=False)\n",
    "\n",
    "        self.attn_dropout = nn.Dropout(attn_dropout)\n",
    "        self.out_dropout = nn.Dropout(dropout)\n",
    "        self.scale = self.d_head**-0.5\n",
    "        self.register_buffer(\n",
    "            \"_mask\",\n",
    "            torch.triu(\n",
    "                torch.full((example_length, example_length), float(\"-inf\")), 1\n",
    "            ).unsqueeze(0),\n",
    "        )\n",
    "\n",
    "    def forward(\n",
    "        self, x: Tensor, mask_future_timesteps: bool = True\n",
    "    ) -> Tuple[Tensor, Tensor]:\n",
    "        # [Batch,Time,MultiHead,AttDim] := [N,T,M,AD]\n",
    "        bs, t, h_size = x.shape\n",
    "        qkv = self.qkv_linears(x)\n",
    "        q, k, v = qkv.split(\n",
    "            (self.n_head * self.d_head, self.n_head * self.d_head, self.d_head), dim=-1\n",
    "        )\n",
    "        q = q.view(bs, t, self.n_head, self.d_head)\n",
    "        k = k.view(bs, t, self.n_head, self.d_head)\n",
    "        v = v.view(bs, t, self.d_head)\n",
    "\n",
    "        # [N,T1,M,Ad] x [N,T2,M,Ad] -> [N,M,T1,T2]\n",
    "        # attn_score = torch.einsum('bind,bjnd->bnij', q, k)\n",
    "        attn_score = torch.matmul(q.permute((0, 2, 1, 3)), k.permute((0, 2, 3, 1)))\n",
    "        attn_score.mul_(self.scale)\n",
    "\n",
    "        if mask_future_timesteps:\n",
    "            attn_score = attn_score + self._mask\n",
    "\n",
    "        attn_prob = F.softmax(attn_score, dim=3)\n",
    "        attn_prob = self.attn_dropout(attn_prob)\n",
    "\n",
    "        # [N,M,T1,T2] x [N,M,T1,Ad] -> [N,M,T1,Ad]\n",
    "        # attn_vec = torch.einsum('bnij,bjd->bnid', attn_prob, v)\n",
    "        attn_vec = torch.matmul(attn_prob, v.unsqueeze(1))\n",
    "        m_attn_vec = torch.mean(attn_vec, dim=1)\n",
    "        out = self.out_proj(m_attn_vec)\n",
    "        out = self.out_dropout(out)\n",
    "\n",
    "        return out, attn_prob"
   ]
  },
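   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick sanity check (hidden from the exported module): because the value tensor is shared across heads, the mean of the per-head attention outputs equals applying the head-averaged attention matrix to the shared values, which is the interpretable multi-head identity above."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# | hide\n",
     "# mean_m Attention(Q_m, K_m, V_shared) == (mean_m A_m) V_shared\n",
     "_probs = F.softmax(torch.randn(2, 4, 6, 6), dim=-1)  # [batch, head, T, T]\n",
     "_v = torch.randn(2, 6, 8)  # values shared across heads\n",
     "_per_head = torch.matmul(_probs, _v.unsqueeze(1)).mean(dim=1)\n",
     "_averaged = torch.matmul(_probs.mean(dim=1), _v)\n",
     "test_eq(torch.allclose(_per_head, _averaged, atol=1e-6), True)"
    ]
   },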
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. TFT Architecture\n",
    "\n",
    "The first TFT's step is embed the original input $\\{\\mathbf{x}^{(s)}, \\mathbf{x}^{(h)}, \\mathbf{x}^{(f)}\\}$ into a high dimensional space $\\{\\mathbf{E}^{(s)}, \\mathbf{E}^{(h)}, \\mathbf{E}^{(f)}\\}$, after which each embedding is gated by a variable selection network (VSN). The static embedding $\\mathbf{E}^{(s)}$ is used as context for variable selection and as initial condition to the LSTM. Finally the encoded variables are fed into the multi-head attention decoder.\n",
    "\n",
    "\\begin{align}\n",
    " c_{s}, c_{e}, (c_{h}, c_{c}) &=\\textrm{StaticCovariateEncoder}(\\mathbf{E}^{(s)}) \\\\ \n",
    "      h_{[:t]}, h_{[t+1:t+H]}  &=\\textrm{TemporalCovariateEncoder}(\\mathbf{E}^{(h)}, \\mathbf{E}^{(f)}, c_{h}, c_{c}) \\\\\n",
    "\\hat{\\mathbf{y}}^{(q)}_{[t+1:t+H]} &=\\textrm{TemporalFusionDecoder}(h_{[t+1:t+H]}, c_{e})\n",
    "\\end{align}"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 Static Covariate Encoder\n",
    "\n",
    "The static embedding $\\mathbf{E}^{(s)}$ is transformed by the StaticCovariateEncoder into contexts $c_{s}, c_{e}, c_{h}, c_{c}$. Where $c_{s}$ are temporal variable selection contexts, $c_{e}$ are TemporalFusionDecoder enriching contexts, and $c_{h}, c_{c}$ are LSTM's hidden/contexts for the TemporalCovariateEncoder.\n",
    "\n",
    "\\begin{align}\n",
    "c_{s}, c_{e}, (c_{h}, c_{c}) & = \\textrm{GRN}(\\textrm{VSN}(\\mathbf{E}^{(s)}))\n",
    "\\end{align}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "class StaticCovariateEncoder(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        hidden_size,\n",
    "        num_static_vars,\n",
    "        dropout,\n",
    "        grn_activation,\n",
    "        rnn_type=\"lstm\",\n",
    "        n_rnn_layers=1,\n",
    "        one_rnn_initial_state=False,\n",
    "    ):\n",
    "        super().__init__()\n",
    "        self.vsn = VariableSelectionNetwork(\n",
    "            hidden_size=hidden_size,\n",
    "            num_inputs=num_static_vars,\n",
    "            dropout=dropout,\n",
    "            grn_activation=grn_activation,\n",
    "        )\n",
    "        self.rnn_type = rnn_type.lower()\n",
    "\n",
    "        self.n_rnn_layers = n_rnn_layers\n",
    "\n",
    "        self.n_states = 1 if one_rnn_initial_state else n_rnn_layers\n",
    "\n",
    "        n_contexts = 2 + 2 * self.n_states if rnn_type == \"lstm\" else 2 + self.n_states\n",
    "\n",
    "        self.context_grns = nn.ModuleList(\n",
    "            [\n",
    "                GRN(input_size=hidden_size, hidden_size=hidden_size, dropout=dropout)\n",
    "                for _ in range(n_contexts)\n",
    "            ]\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tuple[Tensor, Tensor, Tensor, Tensor]:\n",
    "        variable_ctx, sparse_weights = self.vsn(x)\n",
    "\n",
    "        # Context vectors:\n",
    "        # variable selection context\n",
    "        # enrichment context\n",
    "        # state_c context\n",
    "        # state_h context\n",
    "\n",
    "        cs, ce = list(m(variable_ctx) for m in self.context_grns[:2])  # type: ignore\n",
    "\n",
    "        if self.n_states == 1:\n",
    "            ch = torch.cat(\n",
    "                self.n_rnn_layers\n",
    "                * list(\n",
    "                    m(variable_ctx).unsqueeze(0)\n",
    "                    for m in self.context_grns[2 : self.n_states + 2]\n",
    "                )\n",
    "            )\n",
    "\n",
    "            if self.rnn_type == \"lstm\":\n",
    "                cc = torch.cat(\n",
    "                    self.n_rnn_layers\n",
    "                    * list(\n",
    "                        m(variable_ctx).unsqueeze(0)\n",
    "                        for m in self.context_grns[self.n_states + 2 :]\n",
    "                    )\n",
    "                )\n",
    "\n",
    "        else:\n",
    "            ch = torch.cat(\n",
    "                list(\n",
    "                    m(variable_ctx).unsqueeze(0)\n",
    "                    for m in self.context_grns[2 : self.n_states + 2]\n",
    "                )\n",
    "            )\n",
    "\n",
    "            if self.rnn_type == \"lstm\":\n",
    "                cc = torch.cat(\n",
    "                    list(\n",
    "                        m(variable_ctx).unsqueeze(0)\n",
    "                        for m in self.context_grns[self.n_states + 2 :]\n",
    "                    )\n",
    "                )\n",
    "        if self.rnn_type != \"lstm\":\n",
    "            cc = ch\n",
    "\n",
    "        return cs, ce, ch, cc, sparse_weights  # type: ignore"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Temporal Covariate Encoder\n",
    "\n",
    "TemporalCovariateEncoder encodes the embeddings $\\mathbf{E}^{(h)}, \\mathbf{E}^{(f)}$ and contexts  $(c_{h}, c_{c})$ with an LSTM.\n",
    "\n",
    "\\begin{align}\n",
    "\\tilde{\\mathbf{E}}^{(h)}_{[:t]} & = \\textrm{VSN}(\\mathbf{E}^{(h)}_{[:t]}, c_{s}) \\\\\n",
    "\\tilde{\\mathbf{E}}^{(h)}_{[:t]} &= \\mathrm{LSTM}(\\tilde{\\mathbf{E}}^{(h)}_{[:t]}, (c_{h}, c_{c})) \\\\\n",
    "h_{[:t]} &= \\mathrm{Gate}(\\mathrm{LayerNorm}(\\tilde{\\mathbf{E}}^{(h)}_{[:t]}))\n",
    "\\end{align}\n",
    "\n",
    "An analogous process is repeated for the future data, with the main difference that $\\mathbf{E}^{(f)}$ contains the future available information.\n",
    "\n",
    "\\begin{align}\n",
    "\\tilde{\\mathbf{E}}^{(f)}_{[t+1:t+h]} & = \\textrm{VSN}(\\mathbf{E}^{(h)}_{t+1:t+H}, \\mathbf{E}^{(f)}_{t+1:t+H}, c_{s}) \\\\\n",
    "\\tilde{\\mathbf{E}}^{(f)}_{[t+1:t+h]} &= \\mathrm{LSTM}(\\tilde{\\mathbf{E}}^{(h)}_{[t+1:t+h]}, (c_{h}, c_{c})) \\\\\n",
    "h_{[t+1:t+H]} &= \\mathrm{Gate}(\\mathrm{LayerNorm}(\\tilde{\\mathbf{E}}^{(f)}_{[t+1:t+h]}))\n",
    "\\end{align}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "class TemporalCovariateEncoder(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        hidden_size,\n",
    "        num_historic_vars,\n",
    "        num_future_vars,\n",
    "        dropout,\n",
    "        grn_activation,\n",
    "        rnn_type=\"lstm\",\n",
    "        n_rnn_layers=1,\n",
    "    ):\n",
    "        super(TemporalCovariateEncoder, self).__init__()\n",
    "        self.rnn_type = rnn_type.lower()\n",
    "        self.n_rnn_layers = n_rnn_layers\n",
    "\n",
    "        self.history_vsn = VariableSelectionNetwork(\n",
    "            hidden_size=hidden_size,\n",
    "            num_inputs=num_historic_vars,\n",
    "            dropout=dropout,\n",
    "            grn_activation=grn_activation,\n",
    "        )\n",
    "        if self.rnn_type == \"lstm\":\n",
    "            self.history_encoder = nn.LSTM(\n",
    "                input_size=hidden_size,\n",
    "                hidden_size=hidden_size,\n",
    "                batch_first=True,\n",
    "                num_layers=n_rnn_layers,\n",
    "            )\n",
    "\n",
    "            self.future_encoder = nn.LSTM(\n",
    "                input_size=hidden_size,\n",
    "                hidden_size=hidden_size,\n",
    "                batch_first=True,\n",
    "                num_layers=n_rnn_layers,\n",
    "            )\n",
    "\n",
    "        elif self.rnn_type == \"gru\":\n",
    "            self.history_encoder = nn.GRU(\n",
    "                input_size=hidden_size,\n",
    "                hidden_size=hidden_size,\n",
    "                batch_first=True,\n",
    "                num_layers=n_rnn_layers,\n",
    "            )\n",
    "            self.future_encoder = nn.GRU(\n",
    "                input_size=hidden_size,\n",
    "                hidden_size=hidden_size,\n",
    "                batch_first=True,\n",
    "                num_layers=n_rnn_layers,\n",
    "            )\n",
    "        else:\n",
    "            raise ValueError('RNN type should be in [\"lstm\",\"gru\"] !')\n",
    "\n",
    "        self.future_vsn = VariableSelectionNetwork(\n",
    "            hidden_size=hidden_size,\n",
    "            num_inputs=num_future_vars,\n",
    "            dropout=dropout,\n",
    "            grn_activation=grn_activation,\n",
    "        )\n",
    "\n",
    "        # Shared Gated-Skip Connection\n",
    "        self.input_gate = GLU(hidden_size, hidden_size)\n",
    "        self.input_gate_ln = LayerNorm(hidden_size, eps=1e-3)\n",
    "\n",
    "    def forward(self, historical_inputs, future_inputs, cs, ch, cc):\n",
    "        # [N,X_in,L] -> [N,hidden_size,L]\n",
    "        historical_features, history_vsn_sparse_weights = self.history_vsn(\n",
    "            historical_inputs, cs\n",
    "        )\n",
    "        if self.rnn_type == \"lstm\":\n",
    "            history, state = self.history_encoder(historical_features, (ch, cc))\n",
    "\n",
    "        elif self.rnn_type == \"gru\":\n",
    "            history, state = self.history_encoder(historical_features, ch)\n",
    "\n",
    "        future_features, future_vsn_sparse_weights = self.future_vsn(future_inputs, cs)\n",
    "        future, _ = self.future_encoder(future_features, state)\n",
    "        # torch.cuda.synchronize() # this call gives prf boost for unknown reasons\n",
    "\n",
    "        input_embedding = torch.cat([historical_features, future_features], dim=1)\n",
    "        temporal_features = torch.cat([history, future], dim=1)\n",
    "        temporal_features = self.input_gate(temporal_features)\n",
    "        temporal_features = temporal_features + input_embedding\n",
    "        temporal_features = self.input_gate_ln(temporal_features)\n",
    "        return temporal_features, history_vsn_sparse_weights, future_vsn_sparse_weights"
   ]
  },
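   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick sanity check of the encoder's state handoff (hidden from the exported module, using a single `nn.LSTM` for illustration; TFT's history and future encoders are separate modules that pass the state the same way): feeding the future window with the history's final state reproduces a single pass over the concatenated window."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# | hide\n",
     "# two-stage encoding with carried (h, c) state == one full pass\n",
     "torch.manual_seed(0)\n",
     "_lstm = nn.LSTM(input_size=4, hidden_size=4, batch_first=True)\n",
     "_hist, _futr = torch.randn(2, 7, 4), torch.randn(2, 3, 4)\n",
     "_h_out, _state = _lstm(_hist)\n",
     "_f_out, _ = _lstm(_futr, _state)\n",
     "_full_out, _ = _lstm(torch.cat([_hist, _futr], dim=1))\n",
     "test_eq(torch.allclose(torch.cat([_h_out, _f_out], dim=1), _full_out, atol=1e-5), True)"
    ]
   },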
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 Temporal Fusion Decoder\n",
    "\n",
    "The TemporalFusionDecoder enriches the LSTM's outputs with $c_{e}$ and then uses an attention layer, and multi-step adapter.\n",
    "\\begin{align}\n",
    "h_{[t+1:t+H]} &= \\mathrm{MultiHeadAttention}(h_{[:t]}, h_{[t+1:t+H]}, c_{e}) \\\\\n",
    "h_{[t+1:t+H]} &= \\mathrm{Gate}(\\mathrm{LayerNorm}(h_{[t+1:t+H]}) \\\\\n",
    "h_{[t+1:t+H]} &= \\mathrm{Gate}(\\mathrm{LayerNorm}(\\mathrm{GRN}(h_{[t+1:t+H]})) \\\\\n",
    "\\hat{\\mathbf{y}}^{(q)}_{[t+1:t+H]} &= \\mathrm{MLP}(h_{[t+1:t+H]})\n",
    "\\end{align}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | exporti\n",
    "class TemporalFusionDecoder(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        n_head,\n",
    "        hidden_size,\n",
    "        example_length,\n",
    "        encoder_length,\n",
    "        attn_dropout,\n",
    "        dropout,\n",
    "        grn_activation,\n",
    "    ):\n",
    "        super(TemporalFusionDecoder, self).__init__()\n",
    "        self.encoder_length = encoder_length\n",
    "\n",
    "        # ------------- Encoder-Decoder Attention --------------#\n",
    "        self.enrichment_grn = GRN(\n",
    "            input_size=hidden_size,\n",
    "            hidden_size=hidden_size,\n",
    "            context_hidden_size=hidden_size,\n",
    "            dropout=dropout,\n",
    "            activation=grn_activation,\n",
    "        )\n",
    "        self.attention = InterpretableMultiHeadAttention(\n",
    "            n_head=n_head,\n",
    "            hidden_size=hidden_size,\n",
    "            example_length=example_length,\n",
    "            attn_dropout=attn_dropout,\n",
    "            dropout=dropout,\n",
    "        )\n",
    "        self.attention_gate = GLU(hidden_size, hidden_size)\n",
    "        self.attention_ln = LayerNorm(normalized_shape=hidden_size, eps=1e-3)\n",
    "\n",
    "        self.positionwise_grn = GRN(\n",
    "            input_size=hidden_size,\n",
    "            hidden_size=hidden_size,\n",
    "            dropout=dropout,\n",
    "            activation=grn_activation,\n",
    "        )\n",
    "\n",
    "        # ---------------------- Decoder -----------------------#\n",
    "        self.decoder_gate = GLU(hidden_size, hidden_size)\n",
    "        self.decoder_ln = LayerNorm(normalized_shape=hidden_size, eps=1e-3)\n",
    "\n",
    "    def forward(self, temporal_features, ce):\n",
    "        # ------------- Encoder-Decoder Attention --------------#\n",
    "        # Static enrichment\n",
    "        enriched = self.enrichment_grn(temporal_features, c=ce)\n",
    "\n",
    "        # Temporal self attention\n",
    "        x, atten_vect = self.attention(enriched, mask_future_timesteps=True)\n",
    "\n",
    "        # Don't compute historical quantiles\n",
    "        x = x[:, self.encoder_length :, :]\n",
    "        temporal_features = temporal_features[:, self.encoder_length :, :]\n",
    "        enriched = enriched[:, self.encoder_length :, :]\n",
    "\n",
    "        x = self.attention_gate(x)\n",
    "        x = x + enriched\n",
    "        x = self.attention_ln(x)\n",
    "\n",
    "        # Position-wise feed-forward\n",
    "        x = self.positionwise_grn(x)\n",
    "\n",
    "        # ---------------------- Decoder ----------------------#\n",
    "        # Final skip connection\n",
    "        x = self.decoder_gate(x)\n",
    "        x = x + temporal_features\n",
    "        x = self.decoder_ln(x)\n",
    "\n",
    "        return x, atten_vect\n"
   ]
  },
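  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The temporal self-attention above is computed with `mask_future_timesteps=True`, so each position attends only to itself and earlier timesteps. A minimal sketch of such causal masking with plain `torch` (illustrative only, not the notebook's `InterpretableMultiHeadAttention`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "# Illustrative sketch of causal masking, assuming plain torch.\n",
    "import torch\n",
    "\n",
    "L = 6  # example_length = input_size + h\n",
    "scores = torch.randn(L, L)  # raw attention scores\n",
    "mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)\n",
    "scores = scores.masked_fill(mask, float(\"-inf\"))\n",
    "weights = torch.softmax(scores, dim=-1)\n",
    "assert torch.all(weights.triu(1) == 0)  # no attention to future positions"
   ]
  },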
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| export\n",
    "class TFT(BaseModel):\n",
    "    \"\"\"TFT\n",
    "\n",
    "    The Temporal Fusion Transformer architecture (TFT) is a sequence-to-sequence\n",
    "    model that combines static, historic, and future available data to predict a\n",
    "    univariate target. The method combines gating layers, an LSTM recurrent encoder,\n",
    "    an interpretable multi-head attention layer, and a multi-step forecasting\n",
    "    strategy decoder.\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `h`: int, Forecast horizon. <br>\n",
    "    `input_size`: int, autoregressive input size, y=[1,2,3,4] input_size=2 -> y_[t-2:t]=[3,4].<br>\n",
    "    `tgt_size`: int=1, target size.<br>\n",
    "    `stat_exog_list`: str list, static continuous columns.<br>\n",
    "    `hist_exog_list`: str list, historic continuous columns.<br>\n",
    "    `futr_exog_list`: str list, future continuous columns.<br>\n",
    "    `hidden_size`: int, units of embeddings and encoders.<br>\n",
    "    `n_head`: int=4, number of attention heads in temporal fusion decoder.<br>\n",
    "    `attn_dropout`: float (0, 1), dropout of fusion decoder's attention layer.<br>\n",
    "    `grn_activation`: str, activation for the GRN module from ['ReLU', 'Softplus', 'Tanh', 'SELU', 'LeakyReLU', 'Sigmoid', 'ELU', 'GLU'].<br>\n",
    "    `n_rnn_layers`: int=1, number of RNN layers.<br>\n",
    "    `rnn_type`: str=\"lstm\", recurrent neural network (RNN) layer type from [\"lstm\",\"gru\"].<br>\n",
    "    `one_rnn_initial_state`: bool=False, if True, initialize all RNN layers with the same initial state computed from static covariates.<br>\n",
    "    `dropout`: float (0, 1), dropout of the input VSNs.<br>\n",
    "    `loss`: PyTorch module, instantiated train loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
    "    `valid_loss`: PyTorch module=`loss`, instantiated valid loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
    "    `max_steps`: int=1000, maximum number of training steps.<br>\n",
    "    `learning_rate`: float=1e-3, Learning rate between (0, 1).<br>\n",
    "    `num_lr_decays`: int=-1, Number of learning rate decays, evenly distributed across max_steps.<br>\n",
    "    `early_stop_patience_steps`: int=-1, Number of validation iterations before early stopping.<br>\n",
    "    `val_check_steps`: int=100, Number of training steps between every validation loss check.<br>\n",
    "    `batch_size`: int, number of different series in each batch.<br>\n",
    "    `valid_batch_size`: int=None, number of different series in each validation and test batch.<br>\n",
    "    `windows_batch_size`: int=None, windows sampled from rolled data, default uses all.<br>\n",
    "    `inference_windows_batch_size`: int=-1, number of windows to sample in each inference batch, -1 uses all.<br>\n",
    "    `start_padding_enabled`: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.<br>\n",
    "    `step_size`: int=1, step size between each window of temporal data.<br>\n",
    "    `scaler_type`: str='robust', type of scaler for temporal inputs normalization see [temporal scalers](https://nixtla.github.io/neuralforecast/common.scalers.html).<br>\n",
    "    `random_seed`: int, random seed initialization for replicability.<br>\n",
    "    `drop_last_loader`: bool=False, if True `TimeSeriesDataLoader` drops last non-full batch.<br>\n",
    "    `alias`: str, optional,  Custom name of the model.<br>\n",
    "    `optimizer`: Subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).<br>\n",
    "    `optimizer_kwargs`: dict, optional, keyword arguments used by the user specified `optimizer`.<br>\n",
    "    `lr_scheduler`: Subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).<br>\n",
    "    `lr_scheduler_kwargs`: dict, optional, keyword arguments used by the user specified `lr_scheduler`.<br>\n",
    "    `dataloader_kwargs`: dict, optional, keyword arguments passed to the PyTorch Lightning dataloader by the `TimeSeriesDataLoader`.<br>\n",
    "    `**trainer_kwargs`: int,  keyword trainer arguments inherited from [PyTorch Lightning's trainer](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.trainer.Trainer.html?highlight=trainer).<br>\n",
    "\n",
    "    **References:**<br>\n",
    "    - [Bryan Lim, Sercan O. Arik, Nicolas Loeff, Tomas Pfister,\n",
    "    \"Temporal Fusion Transformers for interpretable multi-horizon time series forecasting\"](https://www.sciencedirect.com/science/article/pii/S0169207021000637)\n",
    "    \"\"\"\n",
    "\n",
    "    # Class attributes\n",
    "    EXOGENOUS_FUTR = True\n",
    "    EXOGENOUS_HIST = True\n",
    "    EXOGENOUS_STAT = True\n",
    "    MULTIVARIATE = False    # If the model produces multivariate forecasts (True) or univariate (False)\n",
    "    RECURRENT = False       # If the model produces forecasts recursively (True) or direct (False)\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        h,\n",
    "        input_size,\n",
    "        tgt_size: int = 1,\n",
    "        stat_exog_list=None,\n",
    "        hist_exog_list=None,\n",
    "        futr_exog_list=None,\n",
    "        hidden_size: int = 128,\n",
    "        n_head: int = 4,\n",
    "        attn_dropout: float = 0.0,\n",
    "        grn_activation: str = \"ELU\",\n",
    "        n_rnn_layers: int = 1,\n",
    "        rnn_type: str = \"lstm\",\n",
    "        one_rnn_initial_state: bool = False,\n",
    "        dropout: float = 0.1,\n",
    "        loss=MAE(),\n",
    "        valid_loss=None,\n",
    "        max_steps: int = 1000,\n",
    "        learning_rate: float = 1e-3,\n",
    "        num_lr_decays: int = -1,\n",
    "        early_stop_patience_steps: int = -1,\n",
    "        val_check_steps: int = 100,\n",
    "        batch_size: int = 32,\n",
    "        valid_batch_size: Optional[int] = None,\n",
    "        windows_batch_size: int = 1024,\n",
    "        inference_windows_batch_size: int = 1024,\n",
    "        start_padding_enabled=False,\n",
    "        step_size: int = 1,\n",
    "        scaler_type: str = \"robust\",\n",
    "        random_seed: int = 1,\n",
    "        drop_last_loader=False,\n",
    "        alias: Optional[str] = None,\n",
    "        optimizer=None,\n",
    "        optimizer_kwargs=None,\n",
    "        lr_scheduler=None,\n",
    "        lr_scheduler_kwargs=None,\n",
    "        dataloader_kwargs=None,\n",
    "        **trainer_kwargs,\n",
    "    ):\n",
    "        # Inherit BaseWindows class\n",
    "        super(TFT, self).__init__(\n",
    "            h=h,\n",
    "            input_size=input_size,\n",
    "            stat_exog_list=stat_exog_list,\n",
    "            hist_exog_list=hist_exog_list,\n",
    "            futr_exog_list=futr_exog_list,\n",
    "            loss=loss,\n",
    "            valid_loss=valid_loss,\n",
    "            max_steps=max_steps,\n",
    "            learning_rate=learning_rate,\n",
    "            num_lr_decays=num_lr_decays,\n",
    "            early_stop_patience_steps=early_stop_patience_steps,\n",
    "            val_check_steps=val_check_steps,\n",
    "            batch_size=batch_size,\n",
    "            valid_batch_size=valid_batch_size,\n",
    "            windows_batch_size=windows_batch_size,\n",
    "            inference_windows_batch_size=inference_windows_batch_size,\n",
    "            start_padding_enabled=start_padding_enabled,\n",
    "            step_size=step_size,\n",
    "            scaler_type=scaler_type,\n",
    "            random_seed=random_seed,\n",
    "            drop_last_loader=drop_last_loader,\n",
    "            alias=alias,\n",
    "            optimizer=optimizer,\n",
    "            optimizer_kwargs=optimizer_kwargs,\n",
    "            lr_scheduler=lr_scheduler,\n",
    "            lr_scheduler_kwargs=lr_scheduler_kwargs,\n",
    "            dataloader_kwargs=dataloader_kwargs,\n",
    "            **trainer_kwargs,\n",
    "        )\n",
    "        self.example_length = input_size + h\n",
    "        self.interpretability_params = dict([])  # type: ignore\n",
    "        self.tgt_size = tgt_size\n",
    "        self.grn_activation = grn_activation\n",
    "        futr_exog_size = max(self.futr_exog_size, 1)\n",
    "        num_historic_vars = futr_exog_size + self.hist_exog_size + tgt_size\n",
    "        self.n_rnn_layers = n_rnn_layers\n",
    "        self.rnn_type = rnn_type.lower()\n",
    "        # ------------------------------- Encoders -----------------------------#\n",
    "        self.embedding = TFTEmbedding(\n",
    "            hidden_size=hidden_size,\n",
    "            stat_input_size=self.stat_exog_size,\n",
    "            futr_input_size=futr_exog_size,\n",
    "            hist_input_size=self.hist_exog_size,\n",
    "            tgt_size=tgt_size,\n",
    "        )\n",
    "\n",
    "        if self.stat_exog_size > 0:\n",
    "            self.static_encoder = StaticCovariateEncoder(\n",
    "                hidden_size=hidden_size,\n",
    "                num_static_vars=self.stat_exog_size,\n",
    "                dropout=dropout,\n",
    "                grn_activation=self.grn_activation,\n",
    "                rnn_type=self.rnn_type,\n",
    "                n_rnn_layers=n_rnn_layers,\n",
    "                one_rnn_initial_state=one_rnn_initial_state,\n",
    "            )\n",
    "\n",
    "        self.temporal_encoder = TemporalCovariateEncoder(\n",
    "            hidden_size=hidden_size,\n",
    "            num_historic_vars=num_historic_vars,\n",
    "            num_future_vars=futr_exog_size,\n",
    "            dropout=dropout,\n",
    "            grn_activation=self.grn_activation,\n",
    "            n_rnn_layers=n_rnn_layers,\n",
    "            rnn_type=self.rnn_type,\n",
    "        )\n",
    "\n",
    "        # ------------------------------ Decoders -----------------------------#\n",
    "        self.temporal_fusion_decoder = TemporalFusionDecoder(\n",
    "            n_head=n_head,\n",
    "            hidden_size=hidden_size,\n",
    "            example_length=self.example_length,\n",
    "            encoder_length=self.input_size,\n",
    "            attn_dropout=attn_dropout,\n",
    "            dropout=dropout,\n",
    "            grn_activation=self.grn_activation,\n",
    "        )\n",
    "\n",
    "        # Adapter with Loss dependent dimensions\n",
    "        self.output_adapter = nn.Linear(\n",
    "            in_features=hidden_size, out_features=self.loss.outputsize_multiplier\n",
    "        )\n",
    "\n",
    "    def forward(self, windows_batch):\n",
    "\n",
    "        # Parse windows_batch\n",
    "        y_insample = windows_batch[\"insample_y\"]  # <- [B,T,1]\n",
    "        futr_exog = windows_batch[\"futr_exog\"]\n",
    "        hist_exog = windows_batch[\"hist_exog\"]\n",
    "        stat_exog = windows_batch[\"stat_exog\"]\n",
    "\n",
    "        if futr_exog is None:\n",
    "            futr_exog = y_insample[:, [-1]]\n",
    "            futr_exog = futr_exog.repeat(1, self.example_length, 1)\n",
    "\n",
    "        s_inp, k_inp, o_inp, t_observed_tgt = self.embedding(\n",
    "            target_inp=y_insample,\n",
    "            hist_exog=hist_exog,\n",
    "            futr_exog=futr_exog,\n",
    "            stat_exog=stat_exog,\n",
    "        )\n",
    "\n",
    "        # -------------------------------- Inputs ------------------------------#\n",
    "        # Static context\n",
    "        if s_inp is not None:\n",
    "            cs, ce, ch, cc, static_encoder_sparse_weights = self.static_encoder(s_inp)\n",
    "            # ch, cc = ch.unsqueeze(0), cc.unsqueeze(0)  # LSTM initial states\n",
    "        else:\n",
    "            # If None add zeros\n",
    "            batch_size, example_length, target_size, hidden_size = t_observed_tgt.shape\n",
    "            cs = torch.zeros(size=(batch_size, hidden_size), device=y_insample.device)\n",
    "            ce = torch.zeros(size=(batch_size, hidden_size), device=y_insample.device)\n",
    "            ch = torch.zeros(\n",
    "                size=(self.n_rnn_layers, batch_size, hidden_size),\n",
    "                device=y_insample.device,\n",
    "            )\n",
    "            cc = torch.zeros(\n",
    "                size=(self.n_rnn_layers, batch_size, hidden_size),\n",
    "                device=y_insample.device,\n",
    "            )\n",
    "            static_encoder_sparse_weights = []\n",
    "\n",
    "        # Historical inputs\n",
    "        _historical_inputs = [\n",
    "            k_inp[:, : self.input_size, :],\n",
    "            t_observed_tgt[:, : self.input_size, :],\n",
    "        ]\n",
    "        if o_inp is not None:\n",
    "            _historical_inputs.insert(0, o_inp[:, : self.input_size, :])\n",
    "        historical_inputs = torch.cat(_historical_inputs, dim=-2)\n",
    "        # Future inputs\n",
    "        future_inputs = k_inp[:, self.input_size :]\n",
    "\n",
    "        # ---------------------------- Encode/Decode ---------------------------#\n",
    "        # Embeddings + VSN + LSTM encoders\n",
    "        temporal_features, history_vsn_wgts, future_vsn_wgts = self.temporal_encoder(\n",
    "            historical_inputs=historical_inputs,\n",
    "            future_inputs=future_inputs,\n",
    "            cs=cs,\n",
    "            ch=ch,\n",
    "            cc=cc,\n",
    "        )\n",
    "\n",
    "        # Static enrichment, Attention and decoders\n",
    "        temporal_features, attn_wts = self.temporal_fusion_decoder(\n",
    "            temporal_features=temporal_features, ce=ce\n",
    "        )\n",
    "\n",
    "        # Store params\n",
    "        self.interpretability_params = {\n",
    "            \"history_vsn_wgts\": history_vsn_wgts,\n",
    "            \"future_vsn_wgts\": future_vsn_wgts,\n",
    "            \"static_encoder_sparse_weights\": static_encoder_sparse_weights,\n",
    "            \"attn_wts\": attn_wts,\n",
    "        }\n",
    "\n",
    "        # Adapt output to loss\n",
    "        y_hat = self.output_adapter(temporal_features)\n",
    "\n",
    "        return y_hat\n",
    "\n",
    "    def mean_on_batch(self, tensor):\n",
    "        batch_size = tensor.size(0)\n",
    "        if batch_size > 1:\n",
    "            return tensor.mean(dim=0)\n",
    "        else:\n",
    "            return tensor.squeeze(0)\n",
    "\n",
    "    def feature_importances(self):\n",
    "        \"\"\"\n",
    "        Compute the feature importances for historical, future, and static features.\n",
    "\n",
    "        Returns:\n",
    "            dict: A dictionary containing the feature importances for each feature type.\n",
    "                The keys are 'Past variable importance over time', 'Future variable\n",
    "                importance over time', and 'Static covariates'; the values are pandas\n",
    "                DataFrames with the corresponding feature importances.\n",
    "        \"\"\"\n",
    "        if not self.interpretability_params:\n",
    "            raise ValueError(\n",
    "                \"No interpretability_params. Make a prediction using the model to generate them.\"\n",
    "            )\n",
    "\n",
    "        importances = {}\n",
    "\n",
    "        # Historical feature importances\n",
    "        hist_vsn_wgts = self.interpretability_params.get(\"history_vsn_wgts\")\n",
    "        hist_exog_list = list(self.hist_exog_list) + list(self.futr_exog_list)\n",
    "        hist_exog_list += (\n",
    "            [f\"observed_target_{i+1}\" for i in range(self.tgt_size)]\n",
    "            if self.tgt_size > 1\n",
    "            else [\"observed_target\"]\n",
    "        )\n",
    "        if len(self.futr_exog_list) < 1:\n",
    "            hist_exog_list += [\"repeated_target\"]\n",
    "        hist_vsn_imp = pd.DataFrame(\n",
    "            self.mean_on_batch(hist_vsn_wgts).cpu().numpy(), columns=hist_exog_list\n",
    "        )\n",
    "        importances[\"Past variable importance over time\"] = hist_vsn_imp\n",
    "        #  importances[\"Past variable importance\"] = hist_vsn_imp.mean(axis=0).sort_values()\n",
    "\n",
    "        # Future feature importances\n",
    "        if self.futr_exog_size > 0:\n",
    "            future_vsn_wgts = self.interpretability_params.get(\"future_vsn_wgts\")\n",
    "            future_vsn_imp = pd.DataFrame(\n",
    "                self.mean_on_batch(future_vsn_wgts).cpu().numpy(),\n",
    "                columns=self.futr_exog_list,\n",
    "            )\n",
    "            importances[\"Future variable importance over time\"] = future_vsn_imp\n",
    "        #   importances[\"Future variable importance\"] = future_vsn_imp.mean(axis=0).sort_values()\n",
    "\n",
    "        # Static feature importances\n",
    "        if self.stat_exog_size > 0:\n",
    "            static_encoder_sparse_weights = self.interpretability_params.get(\n",
    "                \"static_encoder_sparse_weights\"\n",
    "            )\n",
    "\n",
    "            static_vsn_imp = pd.DataFrame(\n",
    "                self.mean_on_batch(static_encoder_sparse_weights).cpu().numpy(),\n",
    "                index=self.stat_exog_list,\n",
    "                columns=[\"importance\"],\n",
    "            )\n",
    "            importances[\"Static covariates\"] = static_vsn_imp.sort_values(\n",
    "                by=\"importance\"\n",
    "            )\n",
    "\n",
    "        return importances\n",
    "\n",
    "    def attention_weights(self):\n",
    "        \"\"\"\n",
    "        Batch-averaged attention weights.\n",
    "\n",
    "        Returns:\n",
    "        np.ndarray: A 2D array of attention weights averaged over the batch and attention heads.\n",
    "\n",
    "        \"\"\"\n",
    "\n",
    "        attention = (\n",
    "            self.mean_on_batch(self.interpretability_params[\"attn_wts\"])\n",
    "            .mean(dim=0)\n",
    "            .cpu()\n",
    "            .numpy()\n",
    "        )\n",
    "\n",
    "        return attention\n",
    "\n",
    "    def feature_importance_correlations(self) -> pd.DataFrame:\n",
    "        \"\"\"\n",
    "        Compute the correlation between the past feature importances and the mean attention weights.\n",
    "\n",
    "        Returns:\n",
    "        pd.DataFrame: A DataFrame containing the correlation coefficients between the past feature importances and the mean attention weights.\n",
    "        \"\"\"\n",
    "        attention = self.attention_weights()[self.input_size :, :].mean(axis=0)\n",
    "        p_c = self.feature_importances()[\"Past variable importance over time\"]\n",
    "        p_c[\"Correlation with Mean Attention\"] = attention[: self.input_size]\n",
    "        return p_c.corr(method=\"spearman\").round(2)"
   ]
  },
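  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The interpretability methods above (`feature_importances`, `attention_weights`) reduce the stored weights over the batch dimension with `mean_on_batch`. A minimal standalone sketch of that reduction, using a hypothetical weights tensor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "# Illustrative sketch: the tensor here is hypothetical; real weights come from a fitted model.\n",
    "import torch\n",
    "\n",
    "\n",
    "def mean_on_batch(tensor):\n",
    "    # Average over the batch if there is more than one window, else drop the batch axis\n",
    "    return tensor.mean(dim=0) if tensor.size(0) > 1 else tensor.squeeze(0)\n",
    "\n",
    "\n",
    "wgts = torch.rand(8, 4, 3)  # [batch, time, n_features]\n",
    "assert mean_on_batch(wgts).shape == (4, 3)"
   ]
  },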
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "# Unit tests for models\n",
    "logging.getLogger(\"pytorch_lightning\").setLevel(logging.ERROR)\n",
    "logging.getLogger(\"lightning_fabric\").setLevel(logging.ERROR)\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    check_model(TFT, [\"airpassengers\"])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. TFT methods"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(TFT.fit, name=\"TFT.fit\", title_level=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(TFT.predict, name=\"TFT.predict\", title_level=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(TFT.feature_importances, name='TFT.feature_importances', title_level=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(TFT.attention_weights, name='TFT.attention_weights', title_level=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(TFT.feature_importance_correlations, name='TFT.feature_importance_correlations', title_level=3)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "from neuralforecast import NeuralForecast\n",
    "\n",
    "# from neuralforecast.models import TFT\n",
    "from neuralforecast.losses.pytorch import DistributionLoss\n",
    "from neuralforecast.utils import AirPassengersPanel, AirPassengersStatic\n",
    "\n",
    "AirPassengersPanel[\"month\"] = AirPassengersPanel.ds.dt.month\n",
    "Y_train_df = AirPassengersPanel[\n",
    "    AirPassengersPanel.ds < AirPassengersPanel[\"ds\"].values[-12]\n",
    "]  # 132 train\n",
    "Y_test_df = AirPassengersPanel[\n",
    "    AirPassengersPanel.ds >= AirPassengersPanel[\"ds\"].values[-12]\n",
    "].reset_index(drop=True)  # 12 test\n",
    "\n",
    "nf = NeuralForecast(\n",
    "    models=[\n",
    "        TFT(\n",
    "            h=12,\n",
    "            input_size=48,\n",
    "            hidden_size=20,\n",
    "            grn_activation=\"ELU\",\n",
    "            rnn_type=\"lstm\",\n",
    "            n_rnn_layers=1,\n",
    "            one_rnn_initial_state=False,\n",
    "            loss=DistributionLoss(distribution=\"StudentT\", level=[80, 90]),\n",
    "            learning_rate=0.005,\n",
    "            stat_exog_list=[\"airline1\"],\n",
    "            futr_exog_list=[\"y_[lag12]\", \"month\"],\n",
    "            hist_exog_list=[\"trend\"],\n",
    "            max_steps=300,\n",
    "            val_check_steps=10,\n",
    "            early_stop_patience_steps=10,\n",
    "            scaler_type=\"robust\",\n",
    "            windows_batch_size=None,\n",
    "            enable_progress_bar=True,\n",
    "        ),\n",
    "    ],\n",
    "    freq=\"ME\",\n",
    ")\n",
    "nf.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)\n",
    "Y_hat_df = nf.predict(futr_df=Y_test_df)\n",
    "\n",
    "# Plot quantile predictions\n",
    "Y_hat_df = Y_hat_df.reset_index(drop=False).drop(columns=[\"unique_id\", \"ds\"])\n",
    "plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)\n",
    "plot_df = pd.concat([Y_train_df, plot_df])\n",
    "\n",
    "plot_df = plot_df[plot_df.unique_id == \"Airline1\"].drop(\"unique_id\", axis=1)\n",
    "plt.plot(plot_df[\"ds\"], plot_df[\"y\"], c=\"black\", label=\"True\")\n",
    "plt.plot(plot_df[\"ds\"], plot_df[\"TFT\"], c=\"purple\", label=\"mean\")\n",
    "plt.plot(plot_df[\"ds\"], plot_df[\"TFT-median\"], c=\"blue\", label=\"median\")\n",
    "plt.fill_between(\n",
    "    x=plot_df[\"ds\"][-12:],\n",
    "    y1=plot_df[\"TFT-lo-90\"][-12:].values,\n",
    "    y2=plot_df[\"TFT-hi-90\"][-12:].values,\n",
    "    alpha=0.4,\n",
    "    label=\"level 90\",\n",
    ")\n",
    "plt.legend()\n",
    "plt.grid()\n",
    "plt.plot()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Interpretability"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Attention Weights"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "attention = nf.models[0].attention_weights()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "def plot_attention(\n",
    "    self, plot: str = \"time\", output: str = \"plot\", width: int = 800, height: int = 400\n",
    "):\n",
    "    \"\"\"\n",
    "    Plot the attention weights.\n",
    "\n",
    "    Args:\n",
    "        plot (str, optional): The type of plot to generate. Can be one of the following:\n",
    "            - 'time': Display the mean attention weights over time.\n",
    "            - 'all': Display the attention weights for each horizon.\n",
    "            - 'heatmap': Display the attention weights as a heatmap.\n",
    "            - An integer in the range [1, model.h] to display the attention weights for a specific horizon.\n",
    "        output (str, optional): The type of output to generate. Can be one of the following:\n",
    "            - 'plot': Display the plot directly.\n",
    "            - 'figure': Return the plot as a figure object.\n",
    "        width (int, optional): Width of the plot in pixels. Default is 800.\n",
    "        height (int, optional): Height of the plot in pixels. Default is 400.\n",
    "\n",
    "    Returns:\n",
    "        matplotlib.figure.Figure: If `output` is 'figure', the function returns the plot as a figure object.\n",
    "    \"\"\"\n",
    "\n",
    "    attention = (\n",
    "        self.mean_on_batch(self.interpretability_params[\"attn_wts\"])\n",
    "        .mean(dim=0)\n",
    "        .cpu()\n",
    "        .numpy()\n",
    "    )\n",
    "\n",
    "    fig, ax = plt.subplots(figsize=(width / 100, height / 100))\n",
    "\n",
    "    if plot == \"time\":\n",
    "        attention = attention[self.input_size :, :].mean(axis=0)\n",
    "        ax.plot(np.arange(-self.input_size, self.h), attention)\n",
    "        ax.axvline(\n",
    "            x=0, color=\"black\", linewidth=3, linestyle=\"--\", label=\"prediction start\"\n",
    "        )\n",
    "        ax.set_title(\"Mean Attention\")\n",
    "        ax.set_xlabel(\"time\")\n",
    "        ax.set_ylabel(\"Attention\")\n",
    "        ax.legend()\n",
    "\n",
    "    elif plot == \"all\":\n",
    "        for i in range(self.input_size, attention.shape[0]):\n",
    "            ax.plot(\n",
    "                np.arange(-self.input_size, self.h),\n",
    "                attention[i, :],\n",
    "                label=f\"horizon {i-self.input_size+1}\",\n",
    "            )\n",
    "        ax.axvline(\n",
    "            x=0, color=\"black\", linewidth=3, linestyle=\"--\", label=\"prediction start\"\n",
    "        )\n",
    "        ax.set_title(\"Attention per horizon\")\n",
    "        ax.set_xlabel(\"time\")\n",
    "        ax.set_ylabel(\"Attention\")\n",
    "        ax.legend()\n",
    "\n",
    "    elif plot == \"heatmap\":\n",
    "        cax = ax.imshow(\n",
    "            attention,\n",
    "            aspect=\"auto\",\n",
    "            cmap=\"viridis\",\n",
    "            extent=[-self.input_size, self.h, -self.input_size, self.h],\n",
    "        )\n",
    "        fig.colorbar(cax)\n",
    "        ax.set_title(\"Attention Heatmap\")\n",
    "        ax.set_xlabel(\"Attention (current time step)\")\n",
    "        ax.set_ylabel(\"Attention (previous time step)\")\n",
    "\n",
    "    elif isinstance(plot, int) and (plot in np.arange(1, self.h + 1)):\n",
    "        i = self.input_size + plot - 1\n",
    "        ax.plot(\n",
    "            np.arange(-self.input_size, self.h),\n",
    "            attention[i, :],\n",
    "            label=f\"horizon {plot}\",\n",
    "        )\n",
    "        ax.axvline(\n",
    "            x=0, color=\"black\", linewidth=3, linestyle=\"--\", label=\"prediction start\"\n",
    "        )\n",
    "        ax.set_title(f\"Attention weight for horizon {plot}\")\n",
    "        ax.set_xlabel(\"time\")\n",
    "        ax.set_ylabel(\"Attention\")\n",
    "        ax.legend()\n",
    "\n",
    "    else:\n",
    "        raise ValueError(\n",
    "            'plot has to be in [\"time\",\"all\",\"heatmap\"] or an integer in [1, model.h]'\n",
    "        )\n",
    "\n",
    "    plt.tight_layout()\n",
    "\n",
    "    if output == \"plot\":\n",
    "        plt.show()\n",
    "    elif output == \"figure\":\n",
    "        return fig\n",
    "    else:\n",
    "        raise ValueError(f\"Invalid output: {output}. Expected 'plot' or 'figure'.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.1 Mean attention"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "plot_attention(nf.models[0], plot=\"time\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.2 Attention of all future time steps"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "plot_attention(nf.models[0], plot=\"all\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.3 Attention of a specific future time step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "plot_attention(nf.models[0], plot=8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Feature Importance\n",
    "### 2.1 Global feature importance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "\n",
    "feature_importances = nf.models[0].feature_importances()\n",
    "feature_importances.keys()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Static variable importances"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "feature_importances[\"Static covariates\"].sort_values(by=\"importance\").plot(kind=\"barh\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Past variable importances"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "feature_importances[\"Past variable importance over time\"].mean().sort_values().plot(\n",
    "    kind=\"barh\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Future variable importances"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "feature_importances[\"Future variable importance over time\"].mean().sort_values().plot(\n",
    "    kind=\"barh\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Variable importances over time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Future variable importance over time\n",
    "Importance of each future covariate at each future time step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "df = feature_importances[\"Future variable importance over time\"]\n",
    "\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(20, 10))\n",
    "bottom = np.zeros(len(df.index))\n",
    "for col in df.columns:\n",
    "    p = ax.bar(np.arange(-len(df), 0), df[col].values, 0.6, label=col, bottom=bottom)\n",
    "    bottom += df[col]\n",
    "ax.set_title(\"Future variable importance over time ponderated by attention\")\n",
    "ax.set_ylabel(\"Importance\")\n",
    "ax.set_xlabel(\"Time\")\n",
    "ax.grid(True)\n",
    "ax.legend()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Past variable importance over time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "df = feature_importances[\"Past variable importance over time\"]\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(20, 10))\n",
    "bottom = np.zeros(len(df.index))\n",
    "\n",
    "for col in df.columns:\n",
    "    p = ax.bar(np.arange(-len(df), 0), df[col].values, 0.6, label=col, bottom=bottom)\n",
    "    bottom += df[col]\n",
    "ax.set_title(\"Past variable importance over time\")\n",
    "ax.set_ylabel(\"Importance\")\n",
    "ax.set_xlabel(\"Time\")\n",
    "ax.legend()\n",
    "ax.grid(True)\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Past variable importance over time ponderated by attention\n",
    "Decomposition of the importance of each time step based on importance of each variable at that time step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "df = feature_importances[\"Past variable importance over time\"]\n",
    "mean_attention = (\n",
    "    nf.models[0]\n",
    "    .attention_weights()[nf.models[0].input_size :, :]\n",
    "    .mean(axis=0)[: nf.models[0].input_size]\n",
    ")\n",
    "df = df.multiply(mean_attention, axis=0)\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(20, 10))\n",
    "bottom = np.zeros(len(df.index))\n",
    "\n",
    "for col in df.columns:\n",
    "    p = ax.bar(np.arange(-len(df), 0), df[col].values, 0.6, label=col, bottom=bottom)\n",
    "    bottom += df[col]\n",
    "ax.set_title(\"Past variable importance over time ponderated by attention\")\n",
    "ax.set_ylabel(\"Importance\")\n",
    "ax.set_xlabel(\"Time\")\n",
    "ax.legend()\n",
    "ax.grid(True)\n",
    "plt.plot(\n",
    "    np.arange(-len(df), 0),\n",
    "    mean_attention,\n",
    "    color=\"black\",\n",
    "    marker=\"o\",\n",
    "    linestyle=\"-\",\n",
    "    linewidth=2,\n",
    "    label=\"mean_attention\",\n",
    ")\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Variable importance correlations over time\n",
    "Variables which gain and lose importance at same moments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "nf.models[0].feature_importance_correlations()"
   ]
  }
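,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a possible follow-up (assuming `feature_importance_correlations` returns a square `pandas.DataFrame` of pairwise correlations, which is not guaranteed here), the matrix can be rendered as a heatmap:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# | eval: false\n",
    "# Sketch: visualize the correlation matrix as a heatmap.\n",
    "# Assumes `corr` is a square pandas DataFrame of pairwise correlations.\n",
    "corr = nf.models[0].feature_importance_correlations()\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(10, 8))\n",
    "im = ax.imshow(corr.values, vmin=-1, vmax=1, cmap=\"coolwarm\")\n",
    "ax.set_xticks(np.arange(len(corr.columns)))\n",
    "ax.set_xticklabels(corr.columns, rotation=90)\n",
    "ax.set_yticks(np.arange(len(corr.index)))\n",
    "ax.set_yticklabels(corr.index)\n",
    "fig.colorbar(im, ax=ax, label=\"Correlation\")\n",
    "ax.set_title(\"Variable importance correlations over time\")\n",
    "plt.show()"
   ]
  }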
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
