{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f11ab7fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "remote = \"https://raw.githubusercontent.com/nansencenter/DA-tutorials\"\n",
    "!wget -qO- {remote}/master/notebooks/resources/colab_bootstrap.sh | bash -s"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e6215bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "from resources import show_answer, interact, cInterval\n",
    "%matplotlib inline\n",
    "import numpy as np\n",
    "import numpy.random as rnd\n",
    "import matplotlib.pyplot as plt\n",
    "plt.ion();"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "967bdb12",
   "metadata": {},
   "source": [
    "# T4 - Time series filtering\n",
    "\n",
    "Before exploring the full (multivariate) Kalman filter (KF),\n",
    "let's first consider scalar but time-dependent (temporal/sequential) problems.\n",
    "$\n",
    "\\newcommand{\\Expect}[0]{\\mathbb{E}}\n",
    "\\newcommand{\\NormDist}{\\mathscr{N}}\n",
    "\\newcommand{\\DynMod}[0]{\\mathscr{M}}\n",
    "\\newcommand{\\ObsMod}[0]{\\mathscr{H}}\n",
    "\\newcommand{\\mat}[1]{{\\mathbf{{#1}}}}\n",
    "\\newcommand{\\vect}[1]{{\\mathbf{#1}}}\n",
    "\\newcommand{\\ta}[0]{\\text{a}}\n",
    "\\newcommand{\\tf}[0]{\\text{f}}\n",
    "$\n",
    "\n",
    "Consider the scalar, stochastic process $\\{x_k\\}$,\n",
    "generated for sequentially increasing time index $k$ by\n",
    "\n",
    "$$ x_{k+1} = \\DynMod_k x_k + q_k \\,. \\tag{DynMod} $$\n",
    "\n",
    "For our present purposes, the **dynamical \"model\"** $\\DynMod_k$ is simply a known number.\n",
    "Suppose we get observations $\\{y_k\\}$ as in:\n",
    "\n",
    "$$ y_k = \\ObsMod_k x_k + r_k \\,. \\tag{ObsMod} $$\n",
    "\n",
    "The noises and $x_0$ are assumed to be independent of each other and across time\n",
    "(i.e., $q_k$ is independent of $q_l$, and $r_k$ of $r_l$, for $k \\neq l$),\n",
    "and Gaussian with known parameters:\n",
    "$$x_0 \\sim \\NormDist(x^\\ta_0, P^\\ta_0),\\quad\n",
    "q_k \\sim \\NormDist(0, Q_k),\\quad\n",
    "r_k \\sim \\NormDist(0, R_k) \\,.$$\n",
    "\n",
    "<a name=\"Example-problem:-AR(1)\"></a>\n",
    "\n",
    "## Example problem: AR(1)\n",
    "\n",
    "For simplicity (though the KF does not require these assumptions),\n",
    "suppose that $\\DynMod_k = \\DynMod$, i.e., it is constant in time.\n",
    "Then $\\{x_k\\}$ forms a so-called order-1 auto-regressive process [[Wikipedia](https://en.wikipedia.org/wiki/Autoregressive_model#Example:_An_AR(1)_process)].\n",
    "Similarly, we drop the time dependence (subscript $k$) from $\\ObsMod_k, Q_k, R_k$.\n",
    "The code below simulates a random realization of this process."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6ad09cea",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use H=1 so that it makes sense to plot data on the same axes as the state.\n",
    "H = 1\n",
    "\n",
    "# Initial estimate\n",
    "xa = 0   # mean\n",
    "Pa = 10  # variance\n",
    "\n",
    "def simulate(nTime, xa, Pa, M, H, Q, R):\n",
    "    \"\"\"Simulate synthetic truth (x) and observations (y).\"\"\"\n",
    "    x = xa + np.sqrt(Pa)*rnd.randn()        # Draw initial condition\n",
    "    truths = np.zeros(nTime)                # Allocate\n",
    "    obsrvs = np.zeros(nTime)                # Allocate\n",
    "    for k in range(nTime):                  # Loop in time\n",
    "        x = M * x + np.sqrt(Q)*rnd.randn()  # Dynamics\n",
    "        y = H * x + np.sqrt(R)*rnd.randn()  # Measurement\n",
    "        truths[k] = x                       # Assign\n",
    "        obsrvs[k] = y                       # Assign\n",
    "    return truths, obsrvs"
   ]
  },
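  {
   "cell_type": "markdown",
   "id": "a1f4c2d9",
   "metadata": {},
   "source": [
    "As a quick, standalone sanity check (not one of the exercise answers): for $|\\DynMod| < 1$,\n",
    "the AR(1) process has stationary variance $Q/(1-\\DynMod^2)$, which the sample variance\n",
    "of a long simulation should approximate. The parameter values below are made up."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e5d3f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standalone re-implementation of the AR(1) loop above (H plays no role here).\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(42)\n",
    "M_, Q_ = 0.9, 1.0\n",
    "x, xx = 0.0, np.zeros(10**5)\n",
    "for k in range(len(xx)):\n",
    "    x = M_ * x + np.sqrt(Q_) * rng.standard_normal()\n",
    "    xx[k] = x\n",
    "print(np.var(xx), Q_ / (1 - M_**2))  # both close to 5.26"
   ]
  },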
  {
   "cell_type": "markdown",
   "id": "9122bd69",
   "metadata": {},
   "source": [
    "The following code plots the process. *You don't need to read or understand it*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd20e5a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "@interact(seed=(1, 12), M=(0, 1.03, .01), nTime=(0, 100),\n",
    "             logR=(-9, 9), logR_bias=(-9, 9),\n",
    "             logQ=(-9, 9), logQ_bias=(-9, 9))\n",
    "def exprmt(seed=4, nTime=50, M=0.97, logR=1, logQ=1, analyses_only=False, logR_bias=0, logQ_bias=0):\n",
    "    R, Q, Q_bias, R_bias = 4.0**np.array([logR, logQ, logQ_bias, logR_bias])\n",
    "\n",
    "    rnd.seed(seed)\n",
    "    truths, obsrvs = simulate(nTime, xa, Pa, M, H, Q, R)\n",
    "\n",
    "    plt.figure(figsize=(9, 6))\n",
    "    kk = 1 + np.arange(nTime)\n",
    "    plt.plot(kk, truths, 'k' , label='True state ($x$)')\n",
    "    plt.plot(kk, obsrvs, 'g*', label='Noisy obs ($y$)', ms=9)\n",
    "\n",
    "    try:\n",
    "        estimates, variances = KF(nTime, xa, Pa, M, H, Q*Q_bias, R*R_bias, obsrvs)\n",
    "        if analyses_only:\n",
    "            plt.plot(kk, estimates[:, 1], label=r'Kalman$^a$ ± 1$\\sigma$')\n",
    "            plt.fill_between(kk, *cInterval(estimates[:, 1], variances[:, 1]), alpha=.2)\n",
    "        else:\n",
    "            kk2 = kk.repeat(2)\n",
    "            plt.plot(kk2, estimates.flatten(), label=r'Kalman ± 1$\\sigma$')\n",
    "            plt.fill_between(kk2, *cInterval(estimates, variances), alpha=.2)\n",
    "    except NameError:\n",
    "        pass\n",
    "\n",
    "    sigproc = {}\n",
    "    ### INSERT ANSWER TO EXC \"signal processing\" HERE ###\n",
    "    # sigproc['some method'] = ...\n",
    "    for method, estimate in sigproc.items():\n",
    "        plt.plot(kk[:len(estimate)], estimate, label=method)\n",
    "\n",
    "    plt.xlabel('Time index (k)')\n",
    "    plt.legend(loc='upper left')\n",
    "    plt.axhline(0, c='k', lw=1, ls='--')\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76a6c90a",
   "metadata": {},
   "source": [
    "**Exc – AR1 properties:** Answer the following.\n",
    "\n",
    "- What does `seed` control?\n",
    "- Explain what happens when `M=0`. Also consider $Q \\rightarrow 0$.  \n",
    "  Can you give a name to this `truth` process,\n",
    "  i.e. a link to the relevant Wikipedia page?  \n",
    "  What about when `M=1`?  \n",
    "  Describe the general nature of the process as `M` changes from 0 to 1.  \n",
    "  What about when `M>1`?  \n",
    "- What happens when $R \\rightarrow 0$ ?\n",
    "- What happens when $R \\rightarrow \\infty$ ?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eeb1548c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('AR1')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f8a3170",
   "metadata": {},
   "source": [
    "<a name=\"The-(univariate)-Kalman-filter-(KF)\"></a>\n",
    "\n",
    "## The (univariate) Kalman filter (KF)\n",
    "\n",
    "Now we have a random variable that evolves in time, which we can *pretend* is unknown\n",
    "in order to estimate (or \"track\") it.\n",
    "From above,\n",
    "$p(x_0) = \\NormDist(x_0 | x^\\ta_0, P^\\ta_0)$ with given parameters.\n",
    "We also know that $x_k$ evolves according to eqn. (DynMod).\n",
    "Therefore, as shown in the [T2 exercise on algebra with random variables](T2%20-%20Gaussian%20distribution.ipynb#Exc-–-linear-algebra-with-random-variables),\n",
    "$p(x_1) = \\NormDist(x_1 | x^\\tf_1, P^\\tf_1)$, with\n",
    "$$\n",
    "\\begin{align}\n",
    "x^\\tf_k &= \\DynMod \\, x^\\ta_{k-1} \\tag{5} \\\\\n",
    "P^\\tf_k &= \\DynMod^2 \\, P^\\ta_{k-1} + Q \\tag{6}\n",
    "\\end{align}\n",
    "$$\n",
    "\n",
    "Formulae (5) and (6) are called the **forecast step** of the KF.\n",
    "But when $y_1$ becomes available (according to eqn. (ObsMod)),\n",
    "we can update/condition our estimate of $x_1$, i.e., compute the posterior,\n",
    "$p(x_1 | y_1) = \\NormDist(x_1 \\mid x^\\ta_1, P^\\ta_1)$,\n",
    "using the formulae we developed for Bayes' rule with\n",
    "[Gaussian distributions](T3%20-%20Bayesian%20inference.ipynb#Linear-Gaussian-Bayes'-rule-(1D)).\n",
    "\n",
    "$$\n",
    "\\begin{align}\n",
    "  P^\\ta_k &= 1/(1/P^\\tf_k + \\ObsMod^2/R) \\,, \\tag{7} \\\\\n",
    "  x^\\ta_k  &= P^\\ta_k (x^\\tf_k/P^\\tf_k + \\ObsMod y_k/R) \\,.  \\tag{8}\n",
    "\\end{align}\n",
    "$$\n",
    "\n",
    "This is called the **analysis step** of the KF.\n",
    "We can subsequently apply the same two steps again\n",
    "to produce forecast and analysis estimates for the next time index, $k+1$.\n",
    "Note that if $k$ is a date index, then \"yesterday's forecast becomes today's prior\".\n",
    "\n",
    "In the case of linearity and Gaussianity,\n",
    "the KF of eqns. (5)-(8) computes the *exact* Bayesian pdfs for $x_k$.\n",
    "<a name=\"Bayesian-filtering-recursion\"></a>\n",
    "<details style=\"border: 1px solid #aaaaaa; border-radius: 4px; padding: 0.5em 0.5em 0;\">\n",
    "  <summary style=\"font-weight: normal; font-style: italic; margin: -0.5em -0.5em 0; padding: 0.5em;\">\n",
    "  But even without these assumptions,\n",
    "  a general (abstract) Bayesian recursive procedure can still be formulated  ... (optional reading 🔍)\n",
    "  </summary>\n",
    "\n",
    "  The following relies only on the \"hidden Markov model\" assumptions.\n",
    "\n",
    "  - The analysis \"assimilates\" $y_k$ according to Bayes' rule to compute $p(x_k | y_{1:k})$,\n",
    "  where $y_{1:k} = y_1, \\ldots, y_k$ is shorthand notation.\n",
    "  $$\n",
    "  p(x_k | y_{1:k}) \\propto p(y_k | x_k) \\, p(x_k | y_{1:k-1}) \\,.\n",
    "  $$\n",
    "  - The forecast \"propagates\" the uncertainty (i.e. density) according to the Chapman-Kolmogorov equation\n",
    "  to produce $p(x_{k+1}| y_{1:k})$.\n",
    "  $$\n",
    "  p(x_{k+1} | y_{1:k}) = \\int p(x_{k+1} | x_k) \\, p(x_k | y_{1:k}) \\, d x_k \\,.\n",
    "  $$\n",
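    "\n",
    "  As a concrete (hypothetical, grid-based) illustration of this recursion,\n",
    "  with made-up numbers and the densities discretized on a grid:\n",
    "\n",
    "  ```python\n",
    "  import numpy as np\n",
    "  grid = np.linspace(-10, 10, 401)              # discretize x\n",
    "  dx = grid[1] - grid[0]\n",
    "  def gauss(x, mu, var):\n",
    "      return np.exp(-(x - mu)**2 / (2*var)) / np.sqrt(2*np.pi*var)\n",
    "  M_, Q_, R_, y = 0.9, 1.0, 1.0, 1.5\n",
    "  prior = gauss(grid, 0.0, 10.0)                # p(x_k | y_{1:k-1})\n",
    "  post = gauss(y, grid, R_) * prior             # analysis: Bayes' rule\n",
    "  post /= post.sum() * dx                       # normalize\n",
    "  trans = gauss(grid[:, None], M_*grid, Q_)     # p(x_{k+1} | x_k)\n",
    "  forecast = trans @ post * dx                  # forecast: Chapman-Kolmogorov\n",
    "  ```\n",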
    "\n",
    "  It is important to appreciate the benefits of the recursive form of these computations:\n",
    "  It reflects the recursiveness (Markov property) of nature:\n",
    "  Both in the problem and our solution, time $k+1$ *builds on* time $k$,\n",
    "  so we do not need to re-do the entire problem for each $k$.\n",
    "  At every time $k$, we only deal with functions of one or two variables: $x_k$ and $x_{k+1}$,\n",
    "  which is a much smaller space (for quantifying our densities or covariances)\n",
    "  than that of the joint pdf $p(x_{1:k} | y_{1:k})$.\n",
    "  Of course, this recursiveness also manifests in the special case of the Kalman filter above.\n",
    "\n",
    "  The above recursive procedure, called ***filtering***, always computes $p(x_l | y_{1:k})$ with $l \\geq k$.\n",
    "  I.e. a filtering estimate only builds on *past* information.\n",
    "  Of course, in the case of real-time forecast initialisations (for prediction),\n",
    "  future observations are not available,\n",
    "  but this is not so if the computations are carried out later.\n",
    "  For example, for climate hindcasts, or reanalyses,\n",
    "  the use of relatively-speaking \"future\" observations ($k > l$) is a possibility,\n",
    "  that should improve estimates by adding information,\n",
    "  and indeed a necessity (often neglected in climate reanalyses) for physical realism\n",
    "  (to avoid artificial jumps due to changing observational information content).\n",
    "  The associated computational problem and procedures are called ***smoothing***.\n",
    "  Recursive formulations are available, with ensemble formulations reviewed by [Raanes (2016)](#References).\n",
    "\n",
    "  - - -\n",
    "</details>\n",
    "\n",
    "#### Exc – Implementation\n",
    "\n",
    "Below is a very rudimentary sequential estimator (not the KF!), which essentially just does \"persistence\" forecasts and sets the analysis estimates to the value of the observations (*which is generally only possible in this linear, scalar case*). Run its cell to define it, and then re-run the above interactive animation cell. Then:\n",
    "\n",
    "- Implement the KF properly by replacing the forecast and analysis steps below. *Re-run the cell.*\n",
    "- Try implementing the analysis step both in the \"precision\" and \"gain\" forms."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1864c6ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "def KF(nTime, xa, Pa, M, H, Q, R, obsrvs):\n",
    "    \"\"\"Kalman filter. PS: (xa, Pa) should be input with *initial* values.\"\"\"\n",
    "    ############################\n",
    "    # TEMPORARY IMPLEMENTATION #\n",
    "    ############################\n",
    "    estimates = np.zeros((nTime, 2))\n",
    "    variances = np.zeros((nTime, 2))\n",
    "    for k in range(nTime):\n",
    "        # Forecast step\n",
    "        xf = xa\n",
    "        Pf = Pa\n",
    "        # Analysis update step\n",
    "        Pa = R / H**2\n",
    "        xa = obsrvs[k] / H\n",
    "        # Assign\n",
    "        estimates[k] = xf, xa\n",
    "        variances[k] = Pf, Pa\n",
    "    return estimates, variances"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ee23d106",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('KF1 code')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6dbaa4bf",
   "metadata": {},
   "source": [
    "#### Exc – KF behaviour\n",
    "\n",
    "- Set `logQ` to its minimum, and `M=1`.  \n",
    "  We established in Exc \"AR1\" that the true state is now constant in time (but unknown).  \n",
    "  How does the KF fare in estimating it?  \n",
    "  Does its uncertainty variance ever reach 0?\n",
    "- What is the KF uncertainty variance in the case of `M=0`?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33843343",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('KF behaviour')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "350ae0cd",
   "metadata": {},
   "source": [
    "<a name=\"Exc-–-Temporal-convergence\"></a>\n",
    "\n",
    "#### Exc – Temporal convergence\n",
    "\n",
    "In general, $\\DynMod$, $\\ObsMod$, $Q$, and $R$ depend on the time index $k$\n",
    "(often to parameterize exogenous factors/forces/conditions),\n",
    "in which case there are no limiting values for the KF variances to converge to.\n",
    "Here, however, we assumed that they are all constant in time.\n",
    "In addition, suppose $Q=0$ and $\\ObsMod = 1$.\n",
    "Show that\n",
    "\n",
    "- (a) $1/P^\\ta_k = 1/(\\DynMod^2 P^\\ta_{k-1}) + 1/R$,\n",
    "  by combining the forecast and analysis equations for the variance.\n",
    "- (b) $1/P^\\ta_k = 1/P^\\ta_0 + k/R$, if $\\DynMod = 1$.\n",
    "- (c) $P^\\ta_{\\infty} = 0$, if $\\DynMod = 1$.\n",
    "- (d) $P^\\ta_{\\infty} = 0$, if $\\DynMod < 1$.  \n",
    "- (e) $P^\\ta_{\\infty} = R (1-1/\\DynMod^2)$, if $\\DynMod > 1$.  \n",
    "  *Hint: Look for the fixed point of the recursion of part (a).*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eba5c579",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('Asymptotic Riccati', 'a')"
   ]
  },
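  {
   "cell_type": "markdown",
   "id": "c3a6e1b8",
   "metadata": {},
   "source": [
    "The fixed point in part (e) can also be verified numerically,\n",
    "simply by iterating the recursion of part (a)\n",
    "(a standalone sketch; the values of $\\DynMod$ and $R$ are made up)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4b7f2c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Iterate the variance recursion (Q = 0, H = 1, M > 1) until convergence,\n",
    "# then compare with the claimed fixed point R*(1 - 1/M**2).\n",
    "M_, R_, Pa_ = 2.0, 4.0, 10.0\n",
    "for _ in range(100):\n",
    "    Pf_ = M_**2 * Pa_         # forecast variance, eqn (6) with Q = 0\n",
    "    Pa_ = 1 / (1/Pf_ + 1/R_)  # analysis variance, eqn (7) with H = 1\n",
    "print(Pa_, R_ * (1 - 1/M_**2))  # both approx. 3.0"
   ]
  },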
  {
   "cell_type": "markdown",
   "id": "82c323a2",
   "metadata": {},
   "source": [
    "**Exc (optional) – Temporal CV, part 2:**\n",
    "Now we don't assume that $Q$ is zero. Instead\n",
    "\n",
    "- (a) Suppose $\\DynMod = 0$. What does $P^\\ta_k$ equal?\n",
    "- (b) Suppose $\\DynMod = 1$. Show that $P^\\ta_\\infty$\n",
    "  satisfies the quadratic equation: $0 = P^2 + Q P - Q R$.  \n",
    "  Thereby, without solving the quadratic equation, show that\n",
    "  - (c) $P^\\ta_\\infty \\rightarrow R$ (from below) if $Q \\rightarrow +\\infty$.\n",
    "  - (d) $P^\\ta_\\infty \\rightarrow \\sqrt{ Q R}$ (from above) if $Q \\rightarrow 0^+$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55d6fd6c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('Asymptotes when Q>0')"
   ]
  },
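  {
   "cell_type": "markdown",
   "id": "e5c8a3d0",
   "metadata": {},
   "source": [
    "Again, the fixed-point relation of part (b) can be checked numerically\n",
    "(a standalone sketch; the values of $Q$ and $R$ are made up)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6d9b4e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Iterate the variance recursion with M = H = 1 and Q > 0, then check that\n",
    "# the limit satisfies 0 = P**2 + Q*P - Q*R (whose positive root here is 2).\n",
    "Q_, R_, Pa_ = 2.0, 4.0, 10.0\n",
    "for _ in range(200):\n",
    "    Pf_ = Pa_ + Q_            # forecast variance, eqn (6) with M = 1\n",
    "    Pa_ = 1 / (1/Pf_ + 1/R_)  # analysis variance, eqn (7) with H = 1\n",
    "print(Pa_, Pa_**2 + Q_*Pa_ - Q_*R_)  # approx. 2.0 and 0.0"
   ]
  },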
  {
   "cell_type": "markdown",
   "id": "a8173122",
   "metadata": {},
   "source": [
    "#### Exc (optional) – Analytic simplification in the case of an unknown constant\n",
    "\n",
    "- Note that if $Q = 0$,\n",
    "then $x_k = \\DynMod^k x_0$.  \n",
    "- So if $\\DynMod = 1$, then $x_k = x_0$, so we are estimating an unknown *constant*,\n",
    "and can drop its time index subscript.  \n",
    "- For simplicity, assume $\\ObsMod = 1$, and $P^a_0 \\rightarrow +\\infty$.  \n",
    "- Then $p(x | y_{1:k}) \\propto \\exp \\big\\{- \\sum_l \\| y_l - x \\|^2_R / 2 \\big\\}$,\n",
    "which, by completing the square, yields $p(x | y_{1:k}) = \\NormDist(x | \\bar{y}, R/k )$.  \n",
    "- In words, the (accumulated) posterior mean is the sample average,\n",
    "  $\\bar{y} = \\frac{1}{k}\\sum_l y_l$,  \n",
    "  and the variance is that of a single observation divided by $k$.\n",
    "\n",
    "Show that this is the same posterior that the KF recursions produce.  \n",
    "*Hint: while this is straightforward for the variance,\n",
    "you will probably want to prove the mean using induction.*\n",
    "\n",
    "#### Exc – Impact of biases\n",
    "\n",
    "Re-run the above interactive animation to set the default control values. Answer the following:\n",
    "\n",
    "- `logR_bias`/`logQ_bias` control the (multiplicative) bias in $R$/$Q$ that is fed to the KF.\n",
    "  What happens when the KF \"thinks\" the measurement/dynamical error\n",
    "  is (much) smaller than it actually is?\n",
    "  What about larger?\n",
    "- Re-run the animation to get default values.\n",
    "  Set `logQ` to 0, which will make the following behaviour easier to describe.\n",
    "  In the code, add 20 to the initial `xa` **given to the KF**.\n",
    "  How long does it take for it to recover from this initial bias?\n",
    "- Multiply `Pa` **given to the KF** by 0.01. What about now?\n",
    "- Remove the previous biases.\n",
    "  Instead, multiply `M` **given to the KF** by 2, and observe what happens.\n",
    "  Try the same, but dividing `M` by 2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "99621a9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('KF with bias')"
   ]
  },
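  {
   "cell_type": "markdown",
   "id": "a9e2c5f7",
   "metadata": {},
   "source": [
    "The unknown-constant exercise above also lends itself to a numerical check:\n",
    "with $\\DynMod = \\ObsMod = 1$, $Q = 0$, and a (nearly) diffuse prior,\n",
    "the KF recursions should reproduce the sample average $\\bar{y}$ and the variance $R/k$\n",
    "(a standalone sketch with made-up values)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0f3d6a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "rng = np.random.default_rng(1)\n",
    "R_, nObs = 4.0, 50\n",
    "ys = 3.14 + np.sqrt(R_) * rng.standard_normal(nObs)  # obs of the constant 3.14\n",
    "xa_, Pa_ = 0.0, 1e9                    # near-diffuse prior\n",
    "for y in ys:\n",
    "    xf_, Pf_ = xa_, Pa_                # forecast is a no-op (M = 1, Q = 0)\n",
    "    Pa_ = 1 / (1/Pf_ + 1/R_)           # analysis, eqn (7)\n",
    "    xa_ = Pa_ * (xf_/Pf_ + y/R_)       # analysis, eqn (8)\n",
    "print(xa_ - ys.mean(), Pa_ - R_/nObs)  # both approx. 0"
   ]
  },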
  {
   "cell_type": "markdown",
   "id": "3594eff0",
   "metadata": {},
   "source": [
    "## Alternative methods\n",
    "\n",
    "When it comes to (especially univariate) time series analysis,\n",
    "the Kalman filter (KF) is not the only option.\n",
    "For example, **signal processing** offers several alternative filters.\n",
    "Indeed, the word \"filter\" in the KF comes from that domain,\n",
    "where it originally referred to removing high-frequency noise,\n",
    "since this often leads to a better estimate of the signal.\n",
    "We will not review signal processing theory here,\n",
    "but challenge you to make use of what `scipy` already has to offer.\n",
    "\n",
    "#### Exc – signal processing\n",
    "\n",
    "Run the following cell to import and define some more tools."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "474ff892",
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy as sp\n",
    "import scipy.signal as sig\n",
    "def nrmlz(x):\n",
    "    return x / x.sum()\n",
    "def trunc(x, n):\n",
    "    return np.pad(x[:n], (0, len(x)-n))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e1cc96d",
   "metadata": {},
   "source": [
    "Now try to \"filter\" the `obsrvs` to produce estimates of the `truths`.\n",
    "For each method, add your estimate (\"filtered signal\" in signal processing parlance)\n",
    "to the `sigproc` dictionary in the interactive animation cell,\n",
    "using an appropriate name/key (this will automatically include it in the plot).\n",
    "Use\n",
    "\n",
    "- (a) [`sig.wiener`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.wiener.html).  \n",
    "  *PS: this is a direct ancestor of the KF*.\n",
    "- (b) a moving average, for example [`sig.windows.hamming`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.hamming.html).  \n",
    "  *Hint: you may also want to use [`sig.convolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html#scipy.signal.convolve)*.\n",
    "- (c) a low-pass filter using [`np.fft`](https://docs.scipy.org/doc/scipy/reference/fft.html#).  \n",
    "  *Hint: you may also want to use the above `trunc` function.*\n",
    "- (d) The [`sig.butter`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html) filter.\n",
    "  *Hint: apply with [`sig.filtfilt`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.filtfilt.html).*\n",
    "- (e) not really a signal processing method: [`sp.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html)\n",
    "\n",
    "The answers should be considered examples, not the uniquely right way."
   ]
  },
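  {
   "cell_type": "markdown",
   "id": "c1a4e7b9",
   "metadata": {},
   "source": [
    "To get you started, here is one possible shape of an answer to part (b):\n",
    "a normalized Hamming-window moving average\n",
    "(a standalone sketch on made-up data; in the animation cell you would apply it to `obsrvs`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d2b5f8c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import scipy.signal as sig\n",
    "rng = np.random.default_rng(3)\n",
    "noisy = np.cumsum(rng.standard_normal(100))  # stand-in for `obsrvs`\n",
    "window = sig.windows.hamming(11)\n",
    "smoothed = sig.convolve(noisy, window / window.sum(), mode='same')\n",
    "print(noisy.shape, smoothed.shape)  # (100,) (100,)"
   ]
  },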
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3e21cf83",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('signal processing', 'a')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb797a4c",
   "metadata": {},
   "source": [
    "But for the above problem (which is linear-Gaussian!),\n",
    "the KF is guaranteed (on average, in the long run, in terms of mean square error)\n",
    "to outperform any other method.\n",
    "We will see cases later (in full-blown state estimation)\n",
    "where the difference is much clearer,\n",
    "and indeed it might not even be clear how to apply signal processing methods.\n",
    "However, the KF has an unfair advantage: we are giving it a lot of information\n",
    "about the problem (`M, H, R, Q`) that the signal processing methods do not have.\n",
    "Therefore, those methods typically require a good deal of tuning\n",
    "(but in practice, so does the KF, since `Q` and `R` are rarely well determined).\n",
    "\n",
    "## Summary\n",
    "\n",
    "The Kalman filter (KF) can be derived by applying linear-Gaussian assumptions\n",
    "to a sequential inference problem.\n",
    "Generally, the uncertainty never converges to 0,\n",
    "and the performance of the filter depends entirely on\n",
    "accurate system parameters (models and error covariance matrices).\n",
    "\n",
    "Classical time series estimation can be performed as a special case of state estimation (i.e., of the KF)\n",
    "[(wherein state-estimation is called the state-space approach)](https://www.google.co.uk/search?q=%22We+now+demonstrate+how+to+put+these+models+into+state+space+form%22&btnG=Search+Books&tbm=bks).\n",
    "Moreover, DA methods produce uncertainty quantification, which is usually less readily available from time series analysis methods.\n",
    "\n",
    "### Next: [T5 - Multivariate Kalman filter](T5%20-%20Multivariate%20Kalman%20filter.ipynb)\n",
    "\n",
    "<a name=\"References\"></a>\n",
    "\n",
    "### References\n",
    "\n",
    "<!--\n",
    "@article{raanes2015rts,\n",
    "    author = {Raanes, Patrick Nima},\n",
    "    title = {On the ensemble {R}auch-{T}ung-{S}triebel smoother and its equivalence to the ensemble {K}alman smoother},\n",
    "    file={~/P/Refs/articles/raanes2015rts.pdf},\n",
    "    doi={10.1002/qj.2728},\n",
    "    journal = {Quarterly Journal of the Royal Meteorological Society},\n",
    "    volume = {142},\n",
    "    number = {696},\n",
    "    pages = {1259--1264},\n",
    "    year = {2016}\n",
    "}\n",
    "-->\n",
    "\n",
    "- **Raanes (2016)**:\n",
    "  Patrick N. Raanes, \"On the ensemble Rauch-Tung-Striebel smoother and its equivalence to the ensemble Kalman smoother\", Quarterly Journal of the Royal Meteorological Society, 2016."
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "jupytext": {
   "formats": "ipynb,nb_mirrors//py:light,nb_mirrors//md"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
