{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "65cc65f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "remote = \"https://raw.githubusercontent.com/nansencenter/DA-tutorials\"\n",
    "!wget -qO- {remote}/master/notebooks/resources/colab_bootstrap.sh | bash -s"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d6824aca",
   "metadata": {},
   "outputs": [],
   "source": [
    "from resources import show_answer, interact, import_from_nb\n",
    "%matplotlib inline\n",
    "import numpy as np\n",
    "import matplotlib as mpl\n",
    "import scipy.stats as ss\n",
    "import numpy.random as rnd\n",
    "import matplotlib.pyplot as plt\n",
    "from scipy.stats import gaussian_kde\n",
    "plt.ion();"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bab674f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "(pdf_G1, grid1d, sample_GM) = import_from_nb(\"T2\", (\"pdf_G1\", \"grid1d\", \"sample_GM\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "73d1d8a1",
   "metadata": {},
   "source": [
    "In [T5](T5%20-%20Multivariate%20Kalman%20filter.ipynb#Exc-–-The-%22Gain%22-form-of-the-KF) we derived the classical Kalman filter (KF),\n",
    "$\n",
    "\\newcommand{\\Expect}[0]{\\mathbb{E}}\n",
    "\\newcommand{\\NormDist}{\\mathscr{N}}\n",
    "\\newcommand{\\DynMod}[0]{\\mathscr{M}}\n",
    "\\newcommand{\\ObsMod}[0]{\\mathscr{H}}\n",
    "\\newcommand{\\mat}[1]{{\\mathbf{{#1}}}}\n",
    "\\newcommand{\\vect}[1]{{\\mathbf{#1}}}\n",
    "\\newcommand{\\trsign}{{\\mathsf{T}}}\n",
    "\\newcommand{\\tr}{^{\\trsign}}\n",
    "\\newcommand{\\ceq}[0]{\\mathrel{≔}}\n",
    "\\newcommand{\\xDim}[0]{D}\n",
    "\\newcommand{\\ta}[0]{\\text{a}}\n",
    "\\newcommand{\\tf}[0]{\\text{f}}\n",
    "\\newcommand{\\I}[0]{\\mat{I}}\n",
    "\\newcommand{\\X}[0]{\\mat{X}}\n",
    "\\newcommand{\\Y}[0]{\\mat{Y}}\n",
    "\\newcommand{\\E}[0]{\\mat{E}}\n",
    "\\newcommand{\\x}[0]{\\vect{x}}\n",
    "\\newcommand{\\y}[0]{\\vect{y}}\n",
    "\\newcommand{\\z}[0]{\\vect{z}}\n",
    "\\newcommand{\\bx}[0]{\\vect{\\bar{x}}}\n",
    "\\newcommand{\\by}[0]{\\vect{\\bar{y}}}\n",
    "\\newcommand{\\bP}[0]{\\mat{P}}\n",
    "\\newcommand{\\barC}[0]{\\mat{\\bar{C}}}\n",
    "\\newcommand{\\ones}[0]{\\vect{1}}\n",
    "\\newcommand{\\AN}[0]{\\big( \\I_N - \\ones \\ones\\tr / N \\big)}\n",
    "\\newcommand{\\diff}[0]{\\mathrm{d}}\n",
    "\\newcommand{\\Reals}{\\mathbb{R}}\n",
    "$\n",
    "wherein the dynamics (and measurements) are assumed linear,\n",
    "i.e. $\\DynMod, \\ObsMod$ are matrices.\n",
    "Furthermore, two different forms were derived,\n",
    "whose efficiency depends on the relative size of the covariance matrices involved.\n",
    "But [T6](T6%20-%20Chaos%20%26%20Lorenz%20[optional].ipynb)\n",
    "illustrated several *non-linear* dynamical systems\n",
    "that we would like to be able track (estimate).\n",
    "The classical approach to handle non-linearity\n",
    "is called the *extended* KF (**EKF**), and its \"derivation\" is straightforward:\n",
    "replace $\\DynMod \\x^\\ta$ by $\\DynMod(\\x^\\ta)$,\n",
    "and $\\DynMod \\, \\bP^\\ta$ by $\\frac{\\partial \\DynMod}{\\partial \\x}(\\x^\\ta) \\, \\bP^\\ta$\n",
    "(the Jacobian can also be seen as the integrated TLM of [T6](T6%20-%20Chaos%20%26%20Lorenz%20[optional].ipynb#Error/perturbation-propagation))\n",
    "and do likewise for $\\ObsMod$ with $\\x^f$ and $\\bP^f$.\n",
    "The EKF is widely used in engineering,\n",
    "but for the class of problems generally found in geoscience,\n",
    "\n",
    "- the TLM linearisation is sometimes too inaccurate (or insufficiently robust to the uncertainty),\n",
    "and the process of deriving and coding up the TLM too arduous\n",
    "(several PhD years, unless auto-differentiable frameworks have been used)\n",
    "or downright illegal (proprietary software).\n",
    "- the size of the covariances $\\bP^{\\tf / \\ta}$ is simply too large to keep in memory,\n",
    "  as highlighted in [T7](T7%20-%20Geostats%20%26%20Kriging%20%5Boptional%5D.ipynb).\n",
    "\n",
    "Therefore, another approach is needed...\n",
    "\n",
    "# T8 - Monte-Carlo & cov. estimation\n",
    "\n",
    "**Monte-Carlo (M-C) methods** are a class of computational algorithms that rely on random/stochastic sampling.\n",
    "They generally trade off higher (though random!) error for lower technical complexity [<sup>[1]</sup>](#Footnote-1:).\n",
    "Examples from optimisation include randomly choosing search directions, swarms,\n",
    "evolutionary mutations, or perturbations for gradient approximation.\n",
    "But the main application area is the computation of (deterministic) integrals via sample averages,\n",
    "which is rooted in the fact that any integral can be formulated as expectations,\n",
    "combined with the law of large numbers ([LLN](T2%20-%20Gaussian%20distribution.ipynb#Probability-essentials)).\n",
    "Thus M-C methods apply to surprisingly large class of problems, including for\n",
    "example a way to [inefficiently approximate the value of $\\pi$](https://en.wikipedia.org/wiki/Monte_Carlo_method#Overview).\n",
    "Indeed, many of the integrals of interest are inherently expectations,\n",
    "in particular the forecast distribution. Its [integral](T4%20-%20Time%20series%20filtering.ipynb#Bayesian-filtering-recursion)\n",
    "is intractable, due to the non-trivial nature of the generating process.\n",
    "However, a Monte-Carlo sample of the forecast distribution\n",
    "can be generated simply by repeated simulation of eqn. (DynMod),\n",
    "constituting the forecast step of the ensemble Kalman filter (**EnKF**).\n",
    "Meanwhile, its analysis update is obtained by replacing\n",
    "$\\ObsMod \\x^\\tf$ and $\\ObsMod \\, \\bP^\\tf$ by the appropriate ensemble moments/statistics[<sup>2</sup>](#Footnote-2:).\n",
    "Outside of the linear-Gaussian case, this swap is an approximation,\n",
    "but the computational cost and/or accuracy may be improved compared with the EKF.\n",
    "The EnKF will be developed in full later;\n",
    "at present, our focus is on the use of a sample\n",
    "to reconstruct, estimate, or represent the underlying distribution.\n",
    "If it is assumed Gaussian, this mostly comes down to the estimation of its covariance matrix.\n",
    "\n",
    "### Moment estimation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e6d7aa0",
   "metadata": {},
   "outputs": [],
   "source": [
    "def estimate_mean_and_cov(E):\n",
    "    #### REPLACE WITH YOUR IMPLEMENTATION ####\n",
    "    x_bar = np.mean(E, axis=1)\n",
    "    C_bar = np.cov(E)\n",
    "    return x_bar, C_bar"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d463cec",
   "metadata": {},
   "source": [
    "**Exc – barbar implementation:** Above, we've used numpy's (`np`) functions\n",
    "to estimate the mean and covariance, $\\bx$ and $\\barC$,\n",
    "from the ensemble matrix $\\E = \\begin{bmatrix} x_{1},& x_{2}, \\ldots x_{N} \\end{bmatrix}$:\n",
    "Now, instead, implement these estimators yourself:\n",
    "$$\\begin{align}\\bx &\\ceq \\frac{1}{N}   \\sum_{n=1}^N \\x_n \\,, \\\\\n",
    "   \\barC &\\ceq \\frac{1}{N-1} \\sum_{n=1}^N (\\x_n - \\bx) (\\x_n - \\bx)^T \\,. \\end{align}$$\n",
    "Use a `for` loop, but don't use numpy's `mean`, `cov`.\n",
    "*Hint: it's convenient to start by allocation: `x_bar = np.zeros(...)`*\n",
    "\n",
    "The following prints some numbers that can be used to check if you got it right.\n",
    "Note that the estimates will never be exact:\n",
    "they contain some amount of random error, a.k.a. ***sampling error***."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8576e109",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Draw\n",
    "N = 80\n",
    "mu = np.array([1, 100, 5])\n",
    "L = np.diag([1, 2, 3]) # ⇒ C = diag([1, 4, 9, ...])\n",
    "E = sample_GM(mu, L=L, N=N)\n",
    "\n",
    "x_bar, C_bar = estimate_mean_and_cov(E)\n",
    "\n",
    "with np.printoptions(precision=1, suppress=True):\n",
    "    print(\"Estimated mean =\", x_bar)\n",
    "    print(\"Estimated cov =\", C_bar, sep=\"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "641f7699",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "# show_answer('ensemble moments, loop')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "145d7a37",
   "metadata": {},
   "source": [
    "**Exc – representation and vectorization**\n",
    "Denote the *centered* ensemble matrix\n",
    "$\\X \\ceq \\begin{bmatrix} \\x_1 -\\bx, & \\ldots & \\x_N -\\bx \\end{bmatrix} \\,.$\n",
    "\n",
    "- (a): Show that $\\X = \\E \\AN$, where $\\ones$ is the column vector of length $N$ with all elements equal to $1$.  \n",
    "  *Hint: consider column $n$ of $\\X$.*  \n",
    "  *PS: it can be shown that $\\ones \\ones\\tr / N$ and its complement is a \"projection matrix\".*\n",
    "- (b): Python (numpy) is quicker if you \"vectorize\" loops (similar to Matlab and other high-level languages).\n",
    "  This is eminently possible with computations of ensemble moments.\n",
    "  Show that $$\\barC = \\X \\X^T /(N-1) \\,.$$\n",
    "- (c) *Optional*: But why don't we try to estimate the \"square root\" (Cholesky factor), `L`, instead of $\\mat{C}$ ?\n",
    "- (d): What is the memory requirement of $\\X$ vs $\\barC$?\n",
    "- (e): Code up this latest formula for $\\barC$ and insert it in `estimate_mean_and_cov(E)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "50031673",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('ensemble moments vectorized', 'a')"
   ]
  },
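  {
   "cell_type": "markdown",
   "id": "a3e52b10",
   "metadata": {},
   "source": [
    "The identities in (a) and (b) can also be verified numerically. A quick sanity check (a sketch, not a substitute for the derivations):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(0)\n",
    "E = rng.standard_normal((3, 7))                  # ensemble: 3 dims, N=7 members\n",
    "N = E.shape[1]\n",
    "X = E @ (np.eye(N) - np.ones((N, N)) / N)        # right-multiply by the centering projection\n",
    "assert np.allclose(X, E - E.mean(axis=1, keepdims=True))  # (a): same as subtracting the mean\n",
    "assert np.allclose(X @ X.T / (N - 1), np.cov(E))          # (b): matches numpy's estimate\n",
    "```"
   ]
  },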
  {
   "cell_type": "markdown",
   "id": "cc93e4c4",
   "metadata": {},
   "source": [
    "**Exc – cross-cov:** The cross-covariance between two random vectors, $\\bx$ and $\\by$, is given by\n",
    "$$\\begin{align}\n",
    "\\barC_{\\x,\\y}\n",
    "&\\ceq \\frac{1}{N-1} \\sum_{n=1}^N\n",
    "(\\x_n - \\bx) (\\y_n - \\by)^T \\\\\\\n",
    "&= \\X \\Y^T /(N-1)\n",
    "\\end{align}$$\n",
    "where $\\Y$ is, similar to $\\X$, the matrix whose columns are $\\y_n - \\by$ for $n=1,\\ldots,N$.  \n",
    "Note that this is simply the covariance formula, but for two different variables,\n",
    "i.e., if $\\Y = \\X$, then $\\barC_{\\x,\\y} = \\barC_{\\x}$ (denoted $\\barC$ above).\n",
    "\n",
    "The code below uses `np.cov` to compute the cross-covariance (in a wasteful manner).\n",
    "Now, instead, implement the above formula yourself:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b1f9eef",
   "metadata": {},
   "outputs": [],
   "source": [
    "def estimate_cross_cov(Ex, Ey):\n",
    "    xDim = len(Ex)\n",
    "    Cxy = np.cov(Ex, Ey) # cov of (X,Y) jointly\n",
    "    Cxy = Cxy[:xDim, xDim:]\n",
    "    return Cxy\n",
    "\n",
    "Ey = 3 * E + 44444\n",
    "Cxy_bar = estimate_cross_cov(E, Ey)\n",
    "\n",
    "with np.printoptions(precision=1, suppress=True):\n",
    "    print(\"Estimated cross cov =\", Cxy_bar, sep=\"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6617a4ce",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "# show_answer('estimate cross')"
   ]
  },
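  {
   "cell_type": "markdown",
   "id": "b7c4d901",
   "metadata": {},
   "source": [
    "As a plausibility check on the printed numbers: since `Ey = 3*E + 44444`, the cross-covariance should be $3 \barC$ exactly (additive constants drop out after centering). A self-contained sketch of the same effect:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(0)\n",
    "Ex = rng.standard_normal((3, 500))\n",
    "Ey = 3 * Ex + 44444               # affine transform: the shift does not affect covariances\n",
    "Cxy = np.cov(Ex, Ey)[:3, 3:]      # cross block of the joint covariance\n",
    "assert np.allclose(Cxy, 3 * np.cov(Ex))\n",
    "```"
   ]
  },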
  {
   "cell_type": "markdown",
   "id": "0d58d707",
   "metadata": {},
   "source": [
    "### Estimation errors\n",
    "\n",
    "It can be shown that the above estimators for the mean and the covariance are *consistent and unbiased*[<sup>3</sup>](#Footnote-3:).\n",
    "***Consistent*** means that the error vanishes as $N \\rightarrow \\infty$.\n",
    "***Unbiased*** means that if we repeat the estimation experiment many times (but use a fixed, finite $N$),\n",
    "then the average of sampling errors will also vanish.\n",
    "Under relatively mild regularity conditions, the [absence of bias implies consistency](https://en.wikipedia.org/wiki/Consistent_estimator#Bias_versus_consistency).\n",
    "\n",
    "The following computes a large number ($K$) of $\\barC$ and $1/\\barC$, estimated with a given ensemble size ($N$).\n",
    "Note that the true variance is $C = 1$.\n",
    "The histograms of the estimates is plotted, along with vertical lines displaying the mean values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49dcfaf4",
   "metadata": {},
   "outputs": [],
   "source": [
    "K = 10000\n",
    "@interact(N=(2, 30), bottom=True)\n",
    "def var_and_precision_estimates(N=4):\n",
    "    E = rnd.randn(K, N)\n",
    "    estims = np.var(E, ddof=1, axis=-1)\n",
    "    bins = np.linspace(0, 6, 40)\n",
    "    plt.figure()\n",
    "    plt.hist(estims,   bins, alpha=.6, density=1)\n",
    "    plt.hist(1/estims, bins, alpha=.6, density=1)\n",
    "    plt.axvline(np.mean(estims),   color=\"C0\", label=\"C\")\n",
    "    plt.axvline(np.mean(1/estims), color=\"C1\", label=\"1/C\")\n",
    "    plt.legend()\n",
    "    plt.show()"
   ]
  },
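  {
   "cell_type": "markdown",
   "id": "c8f0d2a4",
   "metadata": {},
   "source": [
    "The same comparison can be made without the widget. A sketch with plain numbers (at a fixed $N$):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(0)\n",
    "K, N = 100000, 4\n",
    "estims = np.var(rng.standard_normal((K, N)), ddof=1, axis=-1)  # K estimates of C = 1\n",
    "print(np.mean(estims))    # close to 1: the estimator of C is unbiased\n",
    "print(np.mean(1/estims))  # well above 1: the reciprocal is a biased estimate of 1/C\n",
    "```"
   ]
  },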
  {
   "cell_type": "markdown",
   "id": "01183f8c",
   "metadata": {},
   "source": [
    "**Exc – There's bias, and then there's bias:**\n",
    "\n",
    "- Note that $1/\\barC$ does not appear to be an unbiased estimate of $1/C = 1$.  \n",
    "  Explain this by referring to a well-known property of the expectation, $\\Expect$.  \n",
    "- What, roughly, is the dependence of the mean values (vertical lines) on the ensemble size?  \n",
    "  What do they tend to as $N$ goes to $0$?  \n",
    "  What about $+\\infty$ ?\n",
    "- *Optional*: What are the theoretical distributions of $\\barC$ and $1/\\barC$ ?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "79386214",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('variance estimate statistics')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "706380a5",
   "metadata": {},
   "source": [
    "**Exc (optional) – Error notions:**\n",
    "\n",
    "- (a). What's the difference between error and residual?\n",
    "- (b). What's the difference between error and bias?\n",
    "- (c). Show that mean-square-error (MSE) = Bias${}^2$ + Var.  \n",
    "  *Hint: start by writing down the definitions of error, bias, and variance (of $\\widehat{\\theta}$).*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "476230ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('errors')"
   ]
  },
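  {
   "cell_type": "markdown",
   "id": "d1e7b3c5",
   "metadata": {},
   "source": [
    "The decomposition in (c) can be spot-checked numerically. A sketch, taking the ($1/N$-normalized, hence biased) variance estimator as $\widehat{\theta}$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(0)\n",
    "theta = 1.0                                     # true variance of N(0, 1)\n",
    "ests = np.var(rng.standard_normal((100000, 5)), ddof=0, axis=-1)\n",
    "mse = np.mean((ests - theta)**2)\n",
    "bias = np.mean(ests) - theta\n",
    "assert np.isclose(mse, bias**2 + np.var(ests))  # MSE = Bias² + Var\n",
    "```"
   ]
  },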
  {
   "cell_type": "markdown",
   "id": "fb73e82b",
   "metadata": {},
   "source": [
    "### Ensemble *representation*\n",
    "\n",
    "We have seen that a sample can be used to estimate the underlying mean and covariance.\n",
    "Indeed, it can be used to estimate any statistic (expected value) of (wrt.) the distribution.\n",
    "Another way of stating the same point is that the ensemble can be used to *reconstruct* the underlying distribution.\n",
    "Indeed, as we have repeatedly seen since T2, a Gaussian distribution can be\n",
    "described ('parametrized') only through its first two moments,\n",
    "whereupon the density can be computed through the familiar eqn. (GM).\n",
    "Another reconstruction that should be familiar to you is that of histograms.\n",
    "Of course, their step-like nature can be off-putting,\n",
    "and therefore we should also consider their continuous counterpart,\n",
    "namely kernel density estimation (KDE).\n",
    "\n",
    "These methods are illustrated in the widget below.\n",
    "Note that the sample/ensemble gets generated via `randn`,\n",
    "which samples $\\NormDist(0, 1)$, and plotted as thin narrow lines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab83c218",
   "metadata": {},
   "outputs": [],
   "source": [
    "mu = 0\n",
    "sigma2 = 25\n",
    "\n",
    "@interact(              seed=(1, 10), nbins=(2, 60), bw=(0.1, 1))\n",
    "def pdf_reconstructions(seed=5,       nbins=10,      bw=.3):\n",
    "    rnd.seed(seed)\n",
    "    E = mu + np.sqrt(sigma2)*rnd.randn(N)\n",
    "\n",
    "    fig, ax = plt.subplots()\n",
    "    ax.plot(grid1d, pdf_G1(grid1d, mu, sigma2), lw=5,                      label=\"True\")\n",
    "    ax.plot(E, np.zeros(N), '|k', ms=100, mew=.4,                          label=\"_raw ens\")\n",
    "    ax.hist(E, nbins, density=1, alpha=.7, color=\"C5\",                     label=\"Histogram\")\n",
    "    ax.plot(grid1d, pdf_G1(grid1d, np.mean(E), np.var(E)), lw=5,           label=\"Parametric\")\n",
    "    ax.plot(grid1d, gaussian_kde(E.ravel(), bw**2).evaluate(grid1d), lw=5, label=\"KDE\")\n",
    "    ax.set_ylim(top=(3*sigma2)**-.5)\n",
    "    ax.legend()\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8510393d",
   "metadata": {},
   "source": [
    "**Exc – A matter of taste?:**\n",
    "\n",
    "- Which approximation to the true pdf looks better to your eyes?\n",
    "- Which approximation starts with more information?  \n",
    "  What is the downside of making such assumptions?\n",
    "- What value of `bw` causes the \"KDE\" method to most closely\n",
    "  reproduce/recover the \"Parametric\" method? What about `nbins`?  \n",
    "\n",
    "Thus, an ensemble can be used to characterize uncertainty:\n",
    "either by using it to compute (estimate) *statistics* thereof, such as the mean, median,\n",
    "variance, covariance, skewness, confidence intervals, etc\n",
    "(any function of the ensemble can be seen as a \"statistic\"),\n",
    "or by using it to reconstruct the distribution/density from which it is sampled,\n",
    "as illustrated by the widget above.\n",
    "\n",
    "### What about the linearisation?\n",
    "\n",
    "**Exc – The ensemble switcheroo:**\n",
    "Show that, if the covariance matrix $\\bP^\\tf$ is replaced by its estimate based on $\\E^\\tf$, say $\\barC_{\\vect{x}}$,\n",
    "then, in the linear case, $\\ObsMod \\, \\bP^\\tf$ equals\n",
    "the cross covariance estimated from $\\ObsMod \\, \\E^\\tf$ and $\\E^\\tf$, say $\\barC_{\\vect{y},\\vect{x}}$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "29ef26e2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# show_answer('associativity')"
   ]
  },
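  {
   "cell_type": "markdown",
   "id": "e2a9c4f6",
   "metadata": {},
   "source": [
    "This equality is exact (not merely asymptotic), as a quick numerical check confirms. A sketch with an arbitrary linear $\ObsMod$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "rng = np.random.default_rng(0)\n",
    "H = rng.standard_normal((2, 4))           # a linear observation operator\n",
    "Ef = rng.standard_normal((4, 10))         # a (forecast) ensemble\n",
    "X = Ef - Ef.mean(axis=1, keepdims=True)   # centered ensemble\n",
    "Y = H @ X                                 # centering and H commute (H is linear)\n",
    "Cx = X @ X.T / (Ef.shape[1] - 1)\n",
    "Cyx = Y @ X.T / (Ef.shape[1] - 1)\n",
    "assert np.allclose(Cyx, H @ Cx)\n",
    "```"
   ]
  },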
  {
   "cell_type": "markdown",
   "id": "4218b84c",
   "metadata": {},
   "source": [
    "Now, $\\ObsMod \\, \\bP^\\tf$ figures in the KF,\n",
    "but is not applicable in the non-linear (and hence non-Gaussian).\n",
    "On the other hand, $\\barC_{\\vect{y},\\vect{x}}$ is straightforward to compute,\n",
    "which is what the EnKF relies on.\n",
    "But what can be said about the way non-linearity is handled by the EnKF?\n",
    "Let $\\vect{x} \\sim \\NormDist(\\vect{\\mu}, \\mat{C}_{\\vect{x}})$,\n",
    "i.e. $p(\\vect{x}) \\propto e^{ - \\| \\vect{x} - \\vect{\\mu} \\|^2_{\\mat{C}_{\\vect{x}}} / 2 }$,\n",
    "and consider the average (w.r.t. $\\vect{x}$) of the analytic (exact) gradient:\n",
    "$$\n",
    "\\require{cancel}\n",
    "\\begin{align}\n",
    "    \\Expect \\nabla\\! \\ObsMod(\\vect{x})\n",
    "    &:=\n",
    "    \\, \\int_{\\Reals^d} \\nabla\\! \\ObsMod(\\vect{x}) \\, p(\\vect{x}) \\diff \\vect{x} \\\\\n",
    "    &=\n",
    "    - \\int_{\\Reals^d} \\ObsMod(\\vect{x}) \\, [\\nabla p(\\vect{x})]\\tr \\diff \\vect{x}\n",
    "    \\;+\\; \\xcancel{\\bigl[\\ObsMod(\\vect{x}) \\, p(\\vect{x}) \\, \\vect{n} \\bigr]_{\\partial \\Reals^d}}\n",
    "\\end{align}\n",
    "$$\n",
    "by integration by parts, and $p(\\vect{x}) \\xrightarrow[\\partial \\Reals^d]{} 0$.\n",
    "But\n",
    "$$\n",
    "\\begin{align*}\n",
    "    \\nabla p(\\vect{x})\n",
    "    &= p(\\vect{x}) \\, \\nabla\\! \\log p(\\vect{x}) &\\\\\n",
    "    &= p(\\vect{x}) \\, \\mat{C}_{\\vect{x}}^{-1} (\\vect{\\mu} - \\vect{x}) \\,.&\n",
    "\\end{align*}\n",
    "$$\n",
    "In summary, we obtain **Stein's lemma**:\n",
    "$$\n",
    "\\begin{align}\n",
    "    \\Expect \\nabla\\! \\ObsMod(\\vect{x})\n",
    "    &=\n",
    "    \\int_{\\Reals^d} \\ObsMod(\\vect{x}) \\, p(\\vect{x}) \\, (\\vect{x} - \\vect{\\mu})\\tr \\diff \\vect{x} \\;\\; \\mat{C}_{\\vect{x}}^{-1}\n",
    "    \\hspace{6.5em}\n",
    "    \\\\\n",
    "    &= \\mat{C}_{\\vect{y}, \\vect{x}}\\, \\mat{C}_{\\vect{x}}^{-1} \\,.\n",
    "\\end{align}\n",
    "$$\n",
    "Meanwhile, by Slutsky, $\n",
    "    % \\ensgrad\\! \\ObsMod\n",
    "    % =\n",
    "    \\bar{\\mat{C}}_{\\vect{y}, \\vect{x}} \\, \\bar{\\mat{C}}_{\\vect{x}}^{+}\n",
    "    \\xrightarrow[N \\rightarrow \\infty]{}\n",
    "    % \\Expect \\nabla\\! \\ObsMod(\\vect{x})\n",
    "    \\mat{C}_{\\vect{y}, \\vect{x}}\\, \\mat{C}_{\\vect{x}}^{-1}\n",
    "    .$\n",
    "Thus, using $\\bar{\\mat{C}}_{\\vect{y}, \\vect{x}}$\n",
    "in ensemble methods means using an average derivative [[Raanes (2019)](#References)],\n",
    "a claim which was long made somewhat heuristically,\n",
    "along with the view of ensemble as a set of large, random, finite difference perturbations.\n",
    "Alternatively, $\\mat{C}_{\\vect{y}, \\vect{x}}\\, \\mat{C}_{\\vect{x}}^{-1}$ can also be shown to be\n",
    "the analytic derivative – with respect to $\\mu$ – of the average of $\\ObsMod(\\vect{x})$ [[Stordal (2016)]](#References).\n",
    "\n",
    "Note that this derivation of the ensemble linearisation\n",
    "shows that errors (from different members) *cancel out*,\n",
    "and shows exactly the linearisation converges to,\n",
    "neither of which are present in any derivation starting with Taylor-series expansions in $\\ObsMod$.\n",
    "\n",
    "Of course, $\\mat{C}_{\\vect{y}, \\vect{x}}\\, \\mat{C}_{\\vect{x}}^{-1}$ can also be recognized as\n",
    "the formula for linear least-squares (LLS) regression,\n",
    "which enables a discussion of Gauss-Markov (BLUE) optimality.\n",
    "<details style=\"border: 1px solid #aaaaaa; border-radius: 4px; padding: 0.5em 0.5em 0;\">\n",
    "  <summary style=\"font-weight: normal; font-style: italic; margin: -0.5em -0.5em 0; padding: 0.5em;\">\n",
    "  The fact that LLS regression is used by the EnKF was recognized as early as [Anderson (2001)](#References)\n",
    "  (optional reading 🔍)\n",
    "  </summary>\n",
    "\n",
    "  although he employs it in the reverse direction (from $\\vect{y}$ to $\\vect{x}$).\n",
    "  LLS regression was also identified by Snyder (2012) for the Kalman gain as a whole,\n",
    "  and can even be identified as implicit in the ETKF approximation: $\\ObsMod(x) \\approx \\bar{y} + \\mat{Y} \\vect{w}$.\n",
    "  The strict equivalence between these approaches can be globally understood as a corollary\n",
    "  of the fact that the chain rule applies for LLS derivatives.\n",
    "\n",
    "  - - -\n",
    "</details>\n",
    "\n",
    "## Summary\n",
    "\n",
    "Monte-Carlo methods use random sampling to estimate expectations and distributions,\n",
    "making them powerful for complex or nonlinear problems.\n",
    "Ensembles – i.i.d. samples – allow us to estimate statistics and reconstruct distributions,\n",
    "with accuracy improving as the ensemble size grows.\n",
    "Parametric assumptions (e.g. assuming Gaussianity) can be useful in approximating distributions.\n",
    "Sample mean and covariance estimators are consistent and unbiased,\n",
    "but nonlinear functions of these (like the inverse covariance) may be biased.\n",
    "Vectorized computation of ensemble statistics is both efficient and essential for practical use.\n",
    "The ensemble approach naturally handles nonlinearity by simulating the full system,\n",
    "forming the basis for methods like the EnKF.\n",
    "\n",
    "### Next: [T9 - Writing your own EnKF](T9%20-%20Writing%20your%20own%20EnKF.ipynb)\n",
    "\n",
    "- - -\n",
    "\n",
    "- ###### Footnote 1:\n",
    "<a name=\"Footnote-1:\"></a>\n",
    "  Monte-Carlo is *easy to apply* for any domain of integration,\n",
    "  and its (pseudo) randomness means makes it robust against hard-to-foresee biases.\n",
    "  It is sometimes claimed that M-C somewhat escapes the curse of dimensionality because\n",
    "  – by the CLT or Chebyshev's inequality – the probabilistic error of the M-C approximation\n",
    "  asymptotically converges to zero at a rate proportional to $N^{-1/2}$,\n",
    "  regardless of the dimension of the integral, $\\xDim$\n",
    "  (whereas the absolute error of grid-based quadrature methods converges proportional to $N^{-k/\\xDim}$,\n",
    "  for some order $k$).\n",
    "  However, the \"starting\" coefficient of the M-C error is generally highly dependent on $\\xDim$,\n",
    "  and (in high dimensions) much more important than a theoretical asymptote.\n",
    "  Finally, the low-discrepancy sequences of **quasi** M-C [[Caflisch (1998)](#References)]\n",
    "  (arguably the middle-ground between quadrature and M-C)\n",
    "  usually provide convergence at a rate of $(\\log N)^\\xDim / N$\n",
    "  – a good deal faster than plain M-C –\n",
    "  which should dispel any notion that randomness is somehow the secret sauce for fast convergence.\n",
    "- ###### Footnote 2:\n",
    "<a name=\"Footnote-2:\"></a>\n",
    "  **An ensemble** is an *i.i.d.* **sample**.\n",
    "  Its \"members\" (\"particles\", \"realizations\", or \"sample points\") have supposedly been drawn (\"sampled\")\n",
    "  independently from the same distribution.\n",
    "  With the EnKF, these assumptions are generally tenuous, but pragmatic.\n",
    "\n",
    "  Another derivation consists in **hiding** away the non-linearity of $\\ObsMod$ by augmenting the state vector with the observations.\n",
    "  We do not favor this approach pedagogically, since it makes it even less clear just what approximations are being made due to the non-linearity.\n",
    "- ###### Footnote 3:\n",
    "<a name=\"Footnote-3:\"></a>\n",
    "  Why should $(N-1)$ and not simply $N$ be used to normalize the covariance estimate (for unbiasedness)?\n",
    "  Because the left hand side (LHS) of $\\sum_n (\\x_n - \\mu)^2 = N (\\bx - \\mu)^2 + \\sum_n (\\x_n - \\bx)^2$\n",
    "  is always larger than the RHS.\n",
    "  *PS: in practice, in DA, the use of $(N-1)$ is more of a convention than a requirement,\n",
    "  since its impact is attenuated by repeat cycling [[Raanes (2019)](#References)], as well as inflation and localisation.*\n",
    "\n",
    "<a name=\"References\"></a>\n",
    "\n",
    "### References\n",
    "\n",
    "<!--\n",
    "@article{raanes2019adaptive,\n",
    "    author = {Raanes, Patrick N. and Bocquet, Marc and Carrassi, Alberto},\n",
    "    title = {Adaptive covariance inflation in the ensemble {K}alman filter by {G}aussian scale mixtures},\n",
    "    file={~/P/Refs/articles/raanes2019adaptive.pdf},\n",
    "    doi={10.1002/qj.3386},\n",
    "    journal = {Quarterly Journal of the Royal Meteorological Society},\n",
    "    volume={145},\n",
    "    number={718},\n",
    "    pages={53--75},\n",
    "    year={2019},\n",
    "    publisher={Wiley Online Library}\n",
    "}\n",
    "\n",
    "@article{caflisch1998monte,\n",
    "  title={Monte Carlo and quasi-Monte Carlo methods},\n",
    "  author={Caflisch, Russel E.},\n",
    "  journal={Acta numerica},\n",
    "  volume={7},\n",
    "  pages={1--49},\n",
    "  year={1998},\n",
    "  publisher={Cambridge University Press}\n",
    "}\n",
    "\n",
    "@article{sakov2008implications,\n",
    "    title={Implications of the form of the ensemble transformation in the ensemble square root filters},\n",
    "    author={Sakov, Pavel and Oke, Peter R.},\n",
    "    file={~/P/Refs/articles/sakov2008implications.pdf},\n",
    "    journal={Monthly Weather Review},\n",
    "    volume={136},\n",
    "    number={3},\n",
    "    pages={1042--1053},\n",
    "    year={2008}\n",
    "}\n",
    "\n",
    "@article{ott2004local,\n",
    "    title={A local ensemble {K}alman filter for atmospheric data assimilation},\n",
    "    author={Ott, Edward and Hunt, Brian R. and Szunyogh, Istvan and Zimin, Aleksey V. and Kostelich, Eric J. and Corazza, Matteo and Kalnay, Eugenia and Patil, D. J. and Yorke, James A.},\n",
    "    file={~/P/Refs/articles/ott2004local.pdf},\n",
    "    journal={Tellus A},\n",
    "    volume={56},\n",
    "    number={5},\n",
    "    pages={415--428},\n",
    "    year={2004},\n",
    "    publisher={Wiley Online Library}\n",
    "}\n",
    "-->\n",
    "\n",
    "- **Raanes (2019)**:\n",
    "  Patrick N. Raanes, Marc Bocquet, and Alberto Carrassi,\n",
    "  \"Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures\",\n",
    "  Quarterly Journal of the Royal Meteorological Society, 2019.\n",
    "- **Caflisch (1998)**:\n",
    "  Russel E. Caflisch,\n",
    "  \"Monte Carlo and quasi-Monte Carlo methods\",\n",
    "  Acta Numerica, 1998.\n",
    "- **Sakov (2008)**:\n",
    "  Pavel Sakov and Peter R. Oke,\n",
    "  \"Implications of the form of the ensemble transformation in the ensemble square root filters\",\n",
    "  Monthly Weather Review, 2008.\n",
    "- **Ott (2004)**:\n",
    "  Edward Ott, Brian R. Hunt, Istvan Szunyogh, Aleksey V. Zimin, Eric J. Kostelich, Matteo Corazza, Eugenia Kalnay, D. J. Patil, and James A. Yorke,\n",
    "  \"A local ensemble Kalman filter for atmospheric data assimilation\",\n",
    "  Tellus A, 2004."
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "formats": "ipynb,nb_mirrors//py:light,nb_mirrors//md"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
