{
 "nbformat": 4,
 "nbformat_minor": 0,
 "metadata": {
  "colab": {
   "private_outputs": true,
   "provenance": []
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/mikexcohen/Statistics_book/blob/main/stats_ch04_descriptives.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Modern statistics: Intuition, Math, Python, R\n",
    "## Mike X Cohen (sincxpress.com)\n",
    "#### https://www.amazon.com/dp/B0CQRGWGLY\n",
    "#### Code for chapter 4 (descriptives)\n",
    "\n",
    "---\n",
    "\n",
    "# About this code file:\n",
    "\n",
    "### This notebook will reproduce most of the figures in this chapter (some figures were made in Inkscape), and illustrate the statistical concepts explained in the text. The point of providing the code is not just for you to recreate the figures, but for you to modify, adapt, explore, and experiment with the code.\n",
    "\n",
    "### Solutions to all exercises are at the bottom of the notebook.\n",
    "\n",
    "#### This code was written in google-colab. The notebook may require some modifications if you use a different IDE."
   ],
   "metadata": {
    "id": "yeVh6hm2ezCO"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# import libraries and define global settings\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import seaborn as sns\n",
    "import scipy.stats as stats\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# define global figure properties used for publication\n",
    "import matplotlib_inline.backend_inline\n",
    "matplotlib_inline.backend_inline.set_matplotlib_formats('svg') # display figures in vector format\n",
    "plt.rcParams.update({'font.size':14,             # font size\n",
    "                     'savefig.dpi':300,          # output resolution\n",
    "                     'axes.titlelocation':'left',# title location\n",
    "                     'axes.spines.right':False,  # remove axis bounding box\n",
    "                     'axes.spines.top':False,    # remove axis bounding box\n",
    "                     })"
   ],
   "metadata": {
    "id": "Bcz2Oz9IAG2T"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "U8o84xpGAGzv"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.2: Gangnam style video watching data"
   ],
   "metadata": {
    "id": "wyWRMY4daiYx"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate some data\n",
    "timesWatched = np.round( np.abs(np.random.randn(500)*20) )/2\n",
    "\n",
    "# force an outliner\n",
    "timesWatched[300] = 35\n",
    "\n",
    "_,axs = plt.subplots(1,2,figsize=(10,4))\n",
    "\n",
    "axs[0].plot(timesWatched,'ks')\n",
    "axs[0].set_xlabel('Respondent index')\n",
    "axs[0].set_ylabel('Times watched')\n",
    "axs[0].set_title(r'$\\bf{A}$)  Visualized by respondent ID#')\n",
    "\n",
    "axs[1].hist(timesWatched,bins='fd',color='gray',edgecolor='k')\n",
    "axs[1].set_xlabel('Times watched')\n",
    "axs[1].set_ylabel('Count')\n",
    "axs[1].set_title(r'$\\bf{B}$)  Visualized as histogram')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_YT_visualize.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "ZeKOmDvYLbLk"
   },
   "execution_count": null,
   "outputs": []
  },
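   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): a minimal sketch of the Freedman-Diaconis rule\n",
     "# that bins='fd' applies above. Assumes timesWatched from the previous cell.\n",
     "iqr = np.percentile(timesWatched,75) - np.percentile(timesWatched,25)\n",
     "binWidth = 2*iqr / len(timesWatched)**(1/3) # FD bin width: 2*IQR*n^(-1/3)\n",
     "nBins = int(np.ceil( np.ptp(timesWatched)/binWidth ))\n",
     "print(f'FD bin width: {binWidth:.3f}, number of bins: {nBins}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },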
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "6d_wb0O9aiIs"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.3: Two samples from the same population have similar distributions"
   ],
   "metadata": {
    "id": "hIOvjH7YUncZ"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "sampleA = np.random.randn(1500)*2 + np.pi**np.pi\n",
    "sampleB = np.random.randn(1500)*2 + np.pi**np.pi\n",
    "\n",
    "_,axs = plt.subplots(1,2,figsize=(10,4))\n",
    "\n",
    "axs[0].hist(sampleA,bins='fd',color='k',edgecolor='w')\n",
    "axs[0].set(xlabel='Data value',ylabel='Count',xlim=[30,45])\n",
    "axs[0].set_title(r'$\\bf{A}$)  Sample \"A\"')\n",
    "\n",
    "axs[1].hist(sampleB,bins='fd',color='k',edgecolor='w')\n",
    "axs[1].set(xlabel='Data value',ylabel='Count',xlim=[30,45])\n",
    "axs[1].set_title(r'$\\bf{B}$)  Sample \"B\"')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_rand_diffHists.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "FBp_qESFUnZi"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "4Pl69hN7-4Pq"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "r-iNnFKV5P3O"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.4: Analytical Gaussian"
   ],
   "metadata": {
    "id": "KV8Yp1fs-4lx"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Code isn't shown here, because it's part of Exercise 1 :P"
   ],
   "metadata": {
    "id": "3WDJAQRs-7La"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "nlvu357DUnXA"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.5: Examples of distributions"
   ],
   "metadata": {
    "id": "wbiT6nL4TWQP"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "x = np.linspace(-5,5,10001)\n",
    "\n",
    "_,axs = plt.subplots(2,2,figsize=(10,7))\n",
    "\n",
    "# Gaussian\n",
    "y = stats.norm.pdf(x) # stats.norm.pdf(x*1.5-2.7) + stats.norm.pdf(x*1.5+2.7)\n",
    "axs[0,0].plot(x,y,'k',linewidth=3)\n",
    "axs[0,0].set_title(r'$\\bf{A}$)  Gaussian (\"bell curve\")')\n",
    "axs[0,0].set_xlim(x[[0,-1]])\n",
    "\n",
    "# T\n",
    "axs[0,1].plot(x,stats.t.pdf(x,20),'k',linewidth=3)\n",
    "axs[0,1].set_title(r'$\\bf{B}$)  t distribution (df=20)')\n",
    "axs[0,1].set_xlim(x[[0,-1]])\n",
    "\n",
    "# F\n",
    "x = np.linspace(0,10,10001)\n",
    "axs[1,0].plot(x,stats.f.pdf(x,5,100),'k',linewidth=3)\n",
    "axs[1,0].set_title(r'$\\bf{C}$)  F distribution (df=5,100)')\n",
    "axs[1,0].set_xlim(x[[0,-1]])\n",
    "\n",
    "# Chi\n",
    "axs[1,1].plot(x,stats.chi2.pdf(x,3),'k',linewidth=3)\n",
    "axs[1,1].set_title(r'$\\bf{D}$)  Chi-square distribution (df=3)')\n",
    "axs[1,1].set_xlim(x[[0,-1]])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_exampleDistributions.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "DlZtR-0-MPux"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "u99AlisQMP1Y"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.6: Examples of empirical distributions"
   ],
   "metadata": {
    "id": "yqaO02QNMP4A"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# bimodal\n",
    "_,axs = plt.subplots(2,2,figsize=(10,7))\n",
    "\n",
    "\n",
    "# normal distribution with kurtosis\n",
    "X = np.arctanh(np.random.rand(10000)*1.8-.9) + 1.5\n",
    "axs[0,0].hist(X,bins='fd',color=(.4,.4,.4),edgecolor='k')\n",
    "axs[0,0].set_xlabel('Data value')\n",
    "axs[0,0].set_ylabel('Bin count')\n",
    "axs[0,0].set_title(r'$\\bf{A}$)')\n",
    "\n",
    "\n",
    "# uniform distribution\n",
    "X = np.random.rand(1000)\n",
    "axs[0,1].hist(X,bins='fd',color=(.4,.4,.4),edgecolor='k')\n",
    "axs[0,1].set_xlabel('Data value')\n",
    "axs[0,1].set_ylabel('Bin count')\n",
    "axs[0,1].set_title(r'$\\bf{B}$)')\n",
    "\n",
    "\n",
    "\n",
    "# power distribution\n",
    "f = np.linspace(1,10,5001)\n",
    "X = 1/f + np.random.randn(len(f))/200\n",
    "X[X>.9] = .9 # some clipping\n",
    "axs[1,0].hist(X,bins='fd',color=(.4,.4,.4),edgecolor='k')\n",
    "axs[1,0].set_xlabel('Data value')\n",
    "axs[1,0].set_ylabel('Bin count')\n",
    "axs[1,0].set_title(r'$\\bf{C}$)')\n",
    "\n",
    "\n",
    "# bimodal distribution\n",
    "x1 = np.random.randn(500) - 2\n",
    "x2 = np.random.randn(2500) + 2\n",
    "X = np.concatenate((x1,x2))\n",
    "axs[1,1].hist(X,bins='fd',color=(.4,.4,.4),edgecolor='k')\n",
    "axs[1,1].set_xlabel('Data value')\n",
    "axs[1,1].set_ylabel('Bin count')\n",
    "axs[1,1].set_title(r'$\\bf{D}$)')\n",
    "\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_exampleEmpHists.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "QTCVXCIwPNv0"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "CoJ_y3D5TWTR"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.7: Characteristics of distributions"
   ],
   "metadata": {
    "id": "tVdSvKayaiCn"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# function variable\n",
    "x = np.linspace(-4,4,101)\n",
    "\n",
    "ss = [ 1,.3,1 ]\n",
    "cs = [ 0,0,-1 ]\n",
    "\n",
    "colors = [ 'k',(.7,.7,.7),(.4,.4,.4) ]\n",
    "styles = [ '-','--',':' ]\n",
    "\n",
    "plt.figure(figsize=(8,4))\n",
    "\n",
    "for s,c,col,sty in zip(ss,cs,colors,styles):\n",
    "\n",
    "  # create gaussian\n",
    "  gaus = np.exp( -(x-c)**2 / (2*s**2) )\n",
    "  plt.plot(x,gaus,color=col,linewidth=2,linestyle=sty)\n",
    "\n",
    "plt.legend(['Distr. 1','Distr. 2','Distr. 3'])\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_distr_chars.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "bmKLWTuRah_V"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "-3CujyajYfxc"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.9: Histogram showing mean"
   ],
   "metadata": {
    "id": "W_ssj4vkYf0g"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate a Laplace distribution\n",
    "x1 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "x2 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "x = x1-x2+1\n",
    "\n",
    "# and compute its mean\n",
    "xBar = np.mean(x)\n",
    "\n",
    "# histogram\n",
    "plt.figure(figsize=(6,6))\n",
    "plt.hist(x,bins='fd',color=(.7,.7,.7),edgecolor='k')\n",
    "\n",
    "# vertical line for mean\n",
    "plt.plot([xBar,xBar],plt.gca().get_ylim(),'--',color='k',linewidth=4)\n",
    "\n",
    "plt.legend(['Mean','Histogram'])\n",
    "plt.xlabel('Value')\n",
    "plt.ylabel('Count')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_distrWithMean.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "z_HmIWFEYf3Y"
   },
   "execution_count": null,
   "outputs": []
  },
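   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): the mean is the value that minimizes the sum of\n",
     "# squared deviations. Quick numerical check, using x and xBar from above.\n",
     "cands = np.linspace(xBar-1,xBar+1,201)\n",
     "sse = [ np.sum((x-c)**2) for c in cands ]\n",
     "print(f'SSE minimizer: {cands[np.argmin(sse)]:.3f}, mean: {xBar:.3f}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },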
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "fMM3juMpYf6X"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.10: \"Failure\" scenarios for the mean"
   ],
   "metadata": {
    "id": "xubhI3wQs2Vq"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "_,axs = plt.subplots(1,2,figsize=(10,3))\n",
    "\n",
    "\n",
    "## case 1: mean does not reflect the most common value\n",
    "data = [0,0]\n",
    "data[0] = (np.random.randn(400)+2.5)**3 - 50\n",
    "\n",
    "## case 2: bimodal distribution\n",
    "x1 = np.random.randn(500) - 3\n",
    "x2 = np.random.randn(500) + 3\n",
    "data[1] = np.concatenate((x1,x2))\n",
    "\n",
    "\n",
    "# histograms and means\n",
    "for i in range(2):\n",
    "\n",
    "  # data average\n",
    "  xBar = np.mean(data[i])\n",
    "\n",
    "  # histogram with vertical line for mean\n",
    "  axs[i].hist(data[i],bins='fd',color='gray',edgecolor='k')\n",
    "  axs[i].plot([xBar,xBar],axs[i].get_ylim(),'--',color='k',linewidth=4)\n",
    "  axs[i].set_xlabel('Value')\n",
    "  axs[i].set_ylabel('Count')\n",
    "\n",
    "\n",
    "axs[0].set_title(r'$\\bf{A}$)  Non-symmetric distribution')\n",
    "axs[1].set_title(r'$\\bf{B}$)  Bimodal distribution')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_meanFailures.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "imFOLigddmB3"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "1v86mxiUYf9T"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.11: Median and mean"
   ],
   "metadata": {
    "id": "AM9z_SOGYgAL"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate a Laplace distribution\n",
    "x1 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "x2 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "x = x1-x2+1\n",
    "\n",
    "# and compute its mean\n",
    "mean = np.mean(x)\n",
    "median = np.median(x)\n",
    "\n",
    "# histogram\n",
    "fig = plt.figure(figsize=(5,5))\n",
    "plt.hist(x,bins='fd',color=(.7,.7,.7),edgecolor='k')\n",
    "\n",
    "# vertical lines for mean and median\n",
    "plt.plot([mean,mean],plt.gca().get_ylim(),'--',\n",
    "         color='k',linewidth=4,label='Mean')\n",
    "plt.plot([median,median],plt.gca().get_ylim(),':',\n",
    "         color='gray',linewidth=4,label='Median')\n",
    "\n",
    "plt.legend()\n",
    "plt.xlabel('Value')\n",
    "plt.ylabel('Count')\n",
    "plt.yticks([])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_distrWithMeanAndMedian.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "51MkBhA6zcRe"
   },
   "execution_count": null,
   "outputs": []
  },
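   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): complementing the mean, the median minimizes the\n",
     "# sum of absolute deviations. Uses x and median from the previous cell.\n",
     "cands = np.linspace(np.min(x),np.max(x),2001)\n",
     "sad = [ np.sum(np.abs(x-c)) for c in cands ]\n",
     "print(f'SAD minimizer: {cands[np.argmin(sad)]:.3f}, median: {median:.3f}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },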
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "PQnqRU7NlAZL"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.12: \"Failures\" of the median"
   ],
   "metadata": {
    "id": "gcMsZRGclAmR"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "_,axs = plt.subplots(1,2,figsize=(10,3))\n",
    "\n",
    "\n",
    "## case 1: mean does not reflect\n",
    "data = [0,0]\n",
    "data[0] = (np.random.randn(400)+2.5)**3 - 50\n",
    "\n",
    "## case 2: bimodal distribution\n",
    "x1 = np.random.randn(500) - 3\n",
    "x2 = np.random.randn(500) + 3\n",
    "data[1] = np.concatenate((x1,x2))\n",
    "\n",
    "\n",
    "# histograms and means\n",
    "for i in range(2):\n",
    "\n",
    "  # data average\n",
    "  mean = np.mean(data[i])\n",
    "  median = np.median(data[i])\n",
    "\n",
    "  # histogram with vertical line for mean\n",
    "  axs[i].hist(data[i],bins='fd',color='gray',edgecolor='k')\n",
    "  axs[i].plot([mean,mean],axs[i].get_ylim(),'--',\n",
    "          color='k',linewidth=4,label='Mean')\n",
    "  axs[i].plot([median,median],axs[i].get_ylim(),':',\n",
    "          color='gray',linewidth=4,label='Median')\n",
    "  axs[i].set_xlabel('Value')\n",
    "  axs[i].set_ylabel('Count')\n",
    "  axs[i].legend()\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_medianFailures.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "XwYXKoJ9zauB"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "B9GBL7mSzaxC"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.13: Mode"
   ],
   "metadata": {
    "id": "4Pt-WblN2Cka"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# the data\n",
    "data = {\n",
    "    'Monday'   :  5,\n",
    "    'Tuesday'  :  7,\n",
    "    'Wednesday': 14,\n",
    "    'Thursday' :  4,\n",
    "    'Friday'   :  3,\n",
    "    'Saturday' : 11,\n",
    "    'Sunday'   : 14\n",
    "}\n",
    "\n",
    "\n",
    "# compute the mode\n",
    "mode = stats.mode(list(data.values()),keepdims=True)\n",
    "mode[0]\n",
    "\n",
    "\n",
    "# plot the data and mode\n",
    "plt.bar(range(len(data)),data.values(),color='k')\n",
    "plt.plot([-.5,len(data)-.5],[mode[0],mode[0]],'--',color='gray')\n",
    "plt.xticks(range(len(data)),labels=data.keys(),rotation=90)\n",
    "plt.yticks(range(0,15,2))\n",
    "plt.ylabel('Counts')\n",
    "plt.title('Preferred washing days',loc='center')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_washingMode.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "rRd4UvkZ2EBp"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "JHtJf-612Ch6"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.14: Dispersion"
   ],
   "metadata": {
    "id": "hQUEK02C2CfD"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# restaurant managers salaries and randomly selected salaries\n",
    "X1 = np.random.randn(1000)*2 + 70\n",
    "X2 = np.random.randn(1000)*8 + 70\n",
    "\n",
    "_,axs = plt.subplots(2,2,figsize=(10,6))\n",
    "\n",
    "axs[0,0].plot(X1,'ko',markerfacecolor='w')\n",
    "axs[0,0].set_ylim([40,100])\n",
    "axs[0,0].set_xlim([-5,len(X1)+5])\n",
    "axs[0,0].set_title(r\"$\\bf{A}$)  Restaurant manager salaries\")\n",
    "axs[0,0].set_ylabel('Salary (thousands)')\n",
    "axs[0,0].set_xlabel('Data sample index')\n",
    "\n",
    "\n",
    "axs[0,1].plot(X2,'ko',markerfacecolor='w')\n",
    "axs[0,1].set_ylim([40,100])\n",
    "axs[0,1].set_xlim([-5,len(X2)+5])\n",
    "axs[0,1].set_title(r\"$\\bf{B}$)  Random salaries\")\n",
    "axs[0,1].set_ylabel('Salary (thousands)')\n",
    "axs[0,1].set_xlabel('Data sample index')\n",
    "\n",
    "\n",
    "\n",
    "axs[1,0].boxplot([X1,X2])\n",
    "axs[1,0].set_xticklabels(['Restaurants','Random'])\n",
    "axs[1,0].set_title(r'$\\bf{C}$)  Box plots')\n",
    "\n",
    "\n",
    "\n",
    "y1,x = np.histogram(X1,bins=np.linspace(np.min(X2),np.max(X2),41))\n",
    "y2,x = np.histogram(X2,bins=np.linspace(np.min(X2),np.max(X2),41))\n",
    "x = (x[:-1]+x[1:])/2\n",
    "axs[1,1].plot(x,y1,'k',linewidth=3,label='Restaurants')\n",
    "axs[1,1].plot(x,y2,'--',color='gray',linewidth=3,label='Random')\n",
    "axs[1,1].set_xlim([np.min(X2),np.max(X2)+2])\n",
    "axs[1,1].legend()\n",
    "axs[1,1].set_xlabel('Salary (thousands)')\n",
    "axs[1,1].set_ylabel('Counts')\n",
    "axs[1,1].set_title(r'$\\bf{D}$)  Histograms')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_dispersion.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "lnsYL5dv2CcY"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "mJPGQwAq2CaB"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.16: Homo/heteroscedasticity"
   ],
   "metadata": {
    "id": "4v_XJq18BOb_"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# sample size and x-axis grid\n",
    "N = 2345\n",
    "x = np.linspace(1,10,N)\n",
    "\n",
    "# generate some data\n",
    "ho = np.random.randn(N)\n",
    "he = np.random.randn(N) * x\n",
    "\n",
    "\n",
    "## visualize\n",
    "_,axs = plt.subplots(1,2,figsize=(10,4))\n",
    "\n",
    "axs[0].plot(x,ho,'ko',markersize=10,markerfacecolor=(.7,.7,.7),alpha=.3)\n",
    "axs[0].set(xlabel='Data index',xticks=[],yticks=[],ylabel='Data value')\n",
    "axs[0].set_title(r'$\\bf{A}$) Homoscedasticity')\n",
    "\n",
    "axs[1].plot(x,he,'ks',markersize=10,markerfacecolor=(.7,.7,.7),alpha=.3)\n",
    "axs[1].set(xlabel='Data index',xticks=[],yticks=[],ylabel='Data value')\n",
    "axs[1].set_title(r'$\\bf{B}$) Heteroscedasticity')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_homohetero.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "CILlMeD_BO-F"
   },
   "execution_count": null,
   "outputs": []
  },
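   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): quantify the difference by comparing the standard\n",
     "# deviation of the first vs. last third of each dataset (ho, he, N from above).\n",
     "for name,d in zip(['homoscedastic','heteroscedastic'],[ho,he]):\n",
     "  print(f'{name}: std(first third)={np.std(d[:N//3]):.2f}, std(last third)={np.std(d[-N//3:]):.2f}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },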
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "vYv3xfskBO7m"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.17: FWHM"
   ],
   "metadata": {
    "id": "B-4EFuK42CXP"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# try on pdf to comapre with analytic\n",
    "x = np.linspace(-8,8,1001)\n",
    "s = 1.9\n",
    "\n",
    "# create the Gaussian and compute its analytic FWHM\n",
    "pureGaus = np.exp( (-x**2)/(2*s**2) )\n",
    "fwhm = 2*s*np.sqrt(2*np.log(2))\n",
    "\n",
    "plt.figure(figsize=(10,6))\n",
    "\n",
    "# plot guide lines\n",
    "plt.plot(x[[0,-1]],[.5,.5],'--',color=(.9,.9,.9))\n",
    "plt.plot(x[[0,-1]],[1,1],'--',color=(.9,.9,.9))\n",
    "plt.plot(x[[0,-1]],[0,0],'--',color=(.9,.9,.9))\n",
    "plt.plot([0,0],[0,1],'--',color=(.9,.9,.9))\n",
    "plt.plot([-fwhm/2,-fwhm/2],[0,.5],'--',color=(.5,.5,.5))\n",
    "plt.plot([fwhm/2,fwhm/2],[0,.5],'--',color=(.5,.5,.5))\n",
    "\n",
    "# plot the gaussian\n",
    "plt.plot(x,pureGaus,'k',linewidth=3)\n",
    "\n",
    "# plot arrows\n",
    "plt.arrow(-fwhm/2,.5,fwhm,0, color=(.5,.5,.5),linewidth=2,zorder=10,\n",
    "          head_width=.05,head_length=.5,length_includes_head=True)\n",
    "plt.arrow(fwhm/2,.5,-fwhm,0, color=(.5,.5,.5),linewidth=2,zorder=10,\n",
    "          head_width=.05,head_length=.5,length_includes_head=True)\n",
    "\n",
    "plt.text(0,.52,'FWHM',horizontalalignment='center',fontsize=20)\n",
    "plt.xlim(x[[0,-1]])\n",
    "plt.yticks([0,.5,1],labels=['0%','50%','100%'])\n",
    "plt.ylabel('Gain')\n",
    "plt.title(f'FWHM = {fwhm:.2f}',loc='center')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_FWHM_def.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "MrNUMaTplu3t"
   },
   "execution_count": null,
   "outputs": []
  },
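   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): estimate the FWHM numerically from the curve and\n",
     "# confirm it matches the analytic value (x, pureGaus, fwhm from the cell above).\n",
     "aboveHalf = x[pureGaus>=.5]            # x values where the curve is at least half-max\n",
     "fwhmEmp = aboveHalf[-1] - aboveHalf[0] # width of that region\n",
     "print(f'numerical FWHM: {fwhmEmp:.3f}, analytic FWHM: {fwhm:.3f}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },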
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "hrFrVSxLyLYe"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.18: Fano factor"
   ],
   "metadata": {
    "id": "9YXbcnKjyL1Z"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# keep mean fixed while varying sigma and show histograms\n",
    "\n",
    "fanos = [ .1,1,10 ]\n",
    "mean = 40\n",
    "\n",
    "# deriving the std from the fano factor:\n",
    "# ff = s^2/m\n",
    "#  s = sqrt(ff*m)\n",
    "\n",
    "\n",
    "plt.figure(figsize=(5,5))\n",
    "lines = [ '-','--',':' ]\n",
    "for i in range(len(fanos)):\n",
    "\n",
    "  # generate data\n",
    "  sigma = np.sqrt(fanos[i]*mean)\n",
    "  x = np.random.normal(mean,sigma,size=10000)\n",
    "\n",
    "  # compute empirical fano factor\n",
    "  ff = np.var(x,ddof=1) / np.mean(x)\n",
    "\n",
    "  # get histogram\n",
    "  yy,xx = np.histogram(x,bins=50)\n",
    "\n",
    "  # plot the line\n",
    "  c = i*.9/len(fanos) # grayscale defined by index\n",
    "  plt.plot((xx[0:-1]+xx[1:])/2,yy,linestyle=lines[i],\n",
    "           linewidth=3,color=(c,c,c),label=f'FF = {ff:.2f}')\n",
    "\n",
    "\n",
    "plt.legend()\n",
    "plt.xlabel('Data value')\n",
    "plt.ylabel('Count')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_randnFF.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "oRd6WUVuyMVr"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "5cRZiZNIlu9I"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.20: IQR"
   ],
   "metadata": {
    "id": "jz5MNW0glu_r"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Create a dataset to work with\n",
    "X = np.exp( np.random.randn(1000)/3 )\n",
    "\n",
    "# and find its quartiles\n",
    "quartiles = np.percentile(X,[25,50,75])\n",
    "\n",
    "# plot the histogram\n",
    "plt.figure(figsize=(10,5))\n",
    "plt.hist(X,bins='fd',color=(.9,.9,.9),edgecolor=(.6,.6,.6))\n",
    "\n",
    "# and draw lines for the quartiles\n",
    "ylim = plt.gca().get_ylim()\n",
    "for q in quartiles:\n",
    "  plt.arrow(q,ylim[1],0,-ylim[1]+1, color='k',linewidth=3,\n",
    "          head_width=.1,head_length=10,length_includes_head=True)\n",
    "\n",
    "# horizontal arrow\n",
    "plt.arrow(quartiles[0],ylim[1]/2,quartiles[2]-quartiles[0],0, color=(.5,.5,.5),linewidth=2,zorder=10,\n",
    "          head_width=5,head_length=.05,length_includes_head=True)\n",
    "plt.arrow(quartiles[2],ylim[1]/2,quartiles[0]-quartiles[2],0, color=(.5,.5,.5),linewidth=2,zorder=10,\n",
    "          head_width=5,head_length=.05,length_includes_head=True)\n",
    "\n",
    "plt.text(np.mean(quartiles[[0,2]]),ylim[1]/2+5,'IQR',horizontalalignment='center',fontsize=20,backgroundcolor=(.6,.6,.6))\n",
    "\n",
    "for i in range(3):\n",
    "  plt.text(quartiles[i],ylim[1],'Q'+str(i+1),horizontalalignment='center',verticalalignment='bottom',fontsize=20,backgroundcolor='w')\n",
    "\n",
    "\n",
    "# some additional plotting niceties...\n",
    "plt.xlabel('Data value')\n",
    "plt.xticks([])\n",
    "plt.ylabel('Histogram count')\n",
    "plt.gca().spines['right'].set_visible(False)\n",
    "plt.gca().spines['top'].set_visible(False)\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_IQR.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "9ZNAh1X2TwJI"
   },
   "execution_count": null,
   "outputs": []
  },
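   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): the IQR itself, from the quartiles computed above\n",
     "# or directly via scipy.\n",
     "iqr_manual = quartiles[2] - quartiles[0] # Q3 - Q1\n",
     "iqr_scipy  = stats.iqr(X)\n",
     "print(f'manual IQR: {iqr_manual:.3f}, scipy IQR: {iqr_scipy:.3f}')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },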
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "VeHO-wEsah7Q"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.21: Create a QQ plot"
   ],
   "metadata": {
    "id": "GIFbtYueah4S"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate two datasets\n",
    "N = 1000\n",
    "d1 = np.random.randn(N) # normal\n",
    "d2 = np.exp( d1*.8 )    # non-normal\n",
    "\n",
    "\n",
    "# data for histograms\n",
    "y1,x1 = np.histogram(d1,bins=40)\n",
    "y1 = y1/np.max(y1)\n",
    "x1 = (x1[:-1]+x1[1:])/2\n",
    "\n",
    "y2,x2 = np.histogram(d2,bins=40)\n",
    "y2 = y2/np.max(y2)\n",
    "x2 = (x2[:-1]+x2[1:])/2\n",
    "\n",
    "\n",
    "\n",
    "# analytic normal distribution\n",
    "x = np.linspace(-4,4,10001)\n",
    "norm = stats.norm.pdf(x)\n",
    "norm = norm/np.max(norm)\n",
    "\n",
    "\n",
    "## now generate the plots\n",
    "_,axs = plt.subplots(2,2,figsize=(10,7))\n",
    "axs[0,0].plot(x1,y1,'k',linewidth=2,label='Empirical')\n",
    "axs[0,0].plot(x,norm,'--',color='gray',linewidth=2,label='Analytic')\n",
    "axs[0,0].legend()\n",
    "axs[0,0].set_xlabel('Data value')\n",
    "axs[0,0].set_ylabel('Probability (norm.)')\n",
    "axs[0,0].set_title(r'$\\bf{A}$)  Distributions')\n",
    "\n",
    "axs[0,1].plot(x2,y2,'k',linewidth=2,label='Empirical')\n",
    "axs[0,1].plot(x,norm,'--',color='gray',linewidth=2,label='Analytic')\n",
    "axs[0,1].legend()\n",
    "axs[0,1].set_xlabel('Data value')\n",
    "axs[0,1].set_ylabel('Probability (norm.)')\n",
    "axs[0,1].set_title(r'$\\bf{B}$)  Distributions')\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# QQ plots\n",
    "stats.probplot(d1,plot=axs[1,0],fit=True)\n",
    "stats.probplot(d2,plot=axs[1,1],fit=True)\n",
    "\n",
    "for i in range(2):\n",
    "  axs[1,i].get_lines()[0].set(markerfacecolor='k',markeredgecolor='k')\n",
    "  axs[1,i].get_lines()[1].set(color='gray',linewidth=3)\n",
    "  axs[1,i].set_title(' ')\n",
    "  axs[1,i].set_ylabel('Data values (sorted)')\n",
    "\n",
    "axs[1,0].set_title(r'$\\bf{C}$)  QQ plot')\n",
    "axs[1,1].set_title(r'$\\bf{D}$)  QQ plot')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_qq.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "QGzG9vmYD2Lw"
   },
   "execution_count": null,
   "outputs": []
  },
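   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): a QQ plot by hand, approximating what stats.probplot\n",
     "# does: sorted data against normal quantiles of the ranks (d1, N from above).\n",
     "ranks = (np.arange(N)+.5)/N   # plotting positions in (0,1)\n",
     "theoQ = stats.norm.ppf(ranks) # theoretical normal quantiles\n",
     "plt.plot(theoQ,np.sort(d1),'k.')\n",
     "plt.xlabel('Theoretical quantiles')\n",
     "plt.ylabel('Data values (sorted)')\n",
     "plt.show()"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },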
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "JFw4sRLsHFg8"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.22: Table for moments"
   ],
   "metadata": {
    "id": "ZQp17gNUHFeX"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# table data\n",
    "colLabs = ['Moment number','Name','Description','Formula']\n",
    "\n",
    "tableData = [ [ 'First'  ,  'Mean'     ,  'Average'      ,  r'$m_1 = N^{-1}\\sum_{i=1}^N x_i$' ],\n",
    "              [ 'Second' ,  'Variance' ,  'Dispersion'   ,  r'$m_2 = N^{-1}\\sum_{i=1}^N (x_i-\\bar{x})^2$' ],\n",
    "              [ 'Third'  ,  'Skew'     ,  'Asymmetry'    ,  r'$m_3 = (N\\sigma^3)^{-1}\\sum_{i=1}^N (x_i-\\bar{x})^3$' ],\n",
    "              [ 'Fourth' ,  'Kurtosis' ,  'Tail fatness' ,  r'$m_4 = (N\\sigma^4)^{-1}\\sum_{i=1}^N (x_i-\\bar{x})^4$' ]\n",
    "]\n",
    "\n",
    "\n",
    "# draw the table\n",
    "fig, ax = plt.subplots(figsize=(10,4))\n",
    "ax.set_axis_off()\n",
    "ht = ax.table(\n",
    "        cellText   = tableData,\n",
    "        colLabels  = colLabs,\n",
    "        colColours = [(.8,.8,.8)] * len(colLabs),\n",
    "        cellLoc    = 'center',\n",
    "        loc        = 'upper left',\n",
    "        )\n",
    "\n",
    "\n",
    "# some adjustments to the fonts etc\n",
    "ht.scale(1,3.8)\n",
    "ht.auto_set_font_size(False)\n",
    "ht.set_fontsize(14)\n",
    "\n",
    "from matplotlib.font_manager import FontProperties\n",
    "for (row, col), cell in ht.get_celld().items():\n",
    "  cell.set_text_props(fontproperties=FontProperties(family='serif'))\n",
    "  if row==0: cell.set_text_props(fontproperties=FontProperties(weight='bold',size=16))\n",
    "  if col<3 and row>0: cell.set_text_props(fontproperties=FontProperties(size=16))\n",
    "\n",
    "# export\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_table_moments1.png', bbox_inches='tight')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "_m5h6cnUZaFE"
   },
   "execution_count": null,
   "outputs": []
  },
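   {
    "cell_type": "code",
    "source": [
     "# Extra (not from the book): compute the table's four moments directly on\n",
     "# random data, and check the skew and kurtosis against scipy.\n",
     "d = np.random.randn(10000)\n",
     "m1 = np.mean(d)\n",
     "m2 = np.mean((d-m1)**2)                 # population variance\n",
     "m3 = np.sum((d-m1)**3) / (len(d)*m2**1.5)\n",
     "m4 = np.sum((d-m1)**4) / (len(d)*m2**2) # 3 for a Gaussian\n",
     "print(f'skew: {m3:.3f} (scipy: {stats.skew(d):.3f})')\n",
     "print(f'kurtosis: {m4:.3f} (scipy: {stats.kurtosis(d,fisher=False):.3f})')"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },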
  {
   "cell_type": "code",
   "source": [
    "## Note about tables, in case you were wondering:\n",
    "# I find latex-created tables to be ugly and really difficult to customize and fit on the page.\n",
    "# Making the tables as a matplotlib figure looks nicer."
   ],
   "metadata": {
    "id": "CESnO3sPKj2Q"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "S-NGpGspwGEN"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.23: Illustration of skew"
   ],
   "metadata": {
    "id": "DrAIQBwewGBE"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# create data using F distribution\n",
    "x = np.linspace(0,4,10001)\n",
    "y = stats.f.pdf(x,10,100)\n",
    "y = y/np.max(y)\n",
    "yBar = np.sum(y)*np.mean(np.diff(x))\n",
    "\n",
    "# plot the distributions\n",
    "_,axs = plt.subplots(2,1,figsize=(4,5))\n",
    "axs[0].plot(-x,y,'k',linewidth=3)\n",
    "axs[0].plot([-yBar,-yBar],[0,1],'k--')\n",
    "axs[0].set_title(r'$\\bf{A}$)  Left (negative) skew')\n",
    "axs[0].set(xlim=-x[[-1,0]])\n",
    "axs[0].set_ylabel('Proportion')\n",
    "axs[0].set_xlabel('Data value')\n",
    "axs[0].text(-3.5,.7,r'$\\sum (x-\\overline{x})^3 < 0$')\n",
    "\n",
    "axs[1].plot(x,y,'k',linewidth=3)\n",
    "axs[1].plot([yBar,yBar],[0,1],'k--')\n",
    "axs[1].set_title(r'$\\bf{B}$)  Right (positive) skew')\n",
    "axs[1].set(xlim=x[[0,-1]])\n",
    "axs[1].text(2,.7,r'$\\sum (x-\\overline{x})^3 > 0$')\n",
    "\n",
    "\n",
    "for a in axs: a.set(xticks=[],yticks=[])\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_skew.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "cUVZpYaUwJVS"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "9fW8OO-ml-X9"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.24: Kurtosis"
   ],
   "metadata": {
    "id": "9fvULjpul-s0"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate the distributions\n",
    "x = np.linspace(-3,3,10001)\n",
    "g = [None]*3\n",
    "g[0] = np.exp( -.5*x**2 )\n",
    "g[1] = np.exp(    -x**2 )\n",
    "g[2] = np.exp( -10*x**4 )\n",
    "\n",
    "\n",
    "# generate the plot\n",
    "s = ['--','-',':']\n",
    "n = ['-ve kurtosis','No excess','+ve kurtosis']\n",
    "for i in range(3):\n",
    "  plt.plot(x,g[i],color=(i/3,i/3,i/3),linewidth=3,linestyle=s[i],label=n[i])\n",
    "\n",
    "plt.legend(loc='upper right',bbox_to_anchor=[1.02,1.02])\n",
    "plt.xlim(x[[0,-1]])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_kurtosis.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "G-Sd1HZiwJY7"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "MTvCFb5NGfYw"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.25: Different kurtosis with the same variance"
   ],
   "metadata": {
    "id": "Ev7KkGmg3mGW"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# use the following 5 lines to manipulate kurtosis\n",
    "x1 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "x2 = np.exp(-np.abs(3*np.random.randn(4000)))\n",
    "d1 = x1-x2+1\n",
    "d2 = np.random.rand(4000)\n",
    "d3 = np.random.randn(4000)\n",
    "\n",
    "# uncomment the following 3 lines to manipulate skew\n",
    "# d1 = np.random.randn(4000)\n",
    "# d2 = np.exp(np.random.randn(4000)/2)\n",
    "# d3 = -np.exp(np.random.randn(4000)/2)\n",
    "\n",
    "# gather into a list\n",
    "data = [d1,d2,d3]\n",
    "\n",
    "S = np.zeros((len(data),4))\n",
    "i = 0\n",
    "datalabel = []\n",
    "\n",
    "plt.figure(figsize=(4,4))\n",
    "\n",
    "for X in data:\n",
    "\n",
    "  # optional normalization\n",
    "  X = (X-np.mean(X)) / np.std(X)\n",
    "\n",
    "  # histogram and plot\n",
    "  y1,x1 = np.histogram(X,bins='fd')\n",
    "  x1 = (x1[1:]+x1[:-1])/2\n",
    "  y1 = 100*y1/np.sum(y1)\n",
    "  plt.plot(x1,y1,linewidth=3,color=(i/3,i/3,i/3))\n",
    "  datalabel.append('d'+str(i+1))\n",
    "\n",
    "  # gather stats\n",
    "  S[i,:] = np.mean(X),np.var(X,ddof=1),stats.skew(X),stats.kurtosis(X)\n",
    "  i += 1\n",
    "\n",
    "\n",
    "plt.legend(datalabel)\n",
    "plt.xlim([np.min(x1),np.max(x1)])\n",
    "plt.xlabel('Data value')\n",
    "plt.ylabel('Percentage')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "\n",
    "# now print out the descriptive stats\n",
    "df = pd.DataFrame(S.T,columns=datalabel,index=['Mean','Variance','Skew','Kurtosis'])\n",
    "from IPython.display import display\n",
    "with pd.option_context('display.float_format','{:5.3f}'.format):\n",
    "  display(df)"
   ],
   "metadata": {
    "id": "hh4WiptE5R3V"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "7cG8Q6YRrnxX"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.27: Histogram bins"
   ],
   "metadata": {
    "id": "68TqEsi9KjrZ"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# table data\n",
    "colLabs = [ 'Method','Formula','Key advantage' ]\n",
    "\n",
    "tableData = [ [ 'Arbitrary'         ,  r'$k=40$'                              ,  'Simple' ],\n",
    "              [ 'Sturges'           ,  r'$k=\\lceil \\log_2(N)\\rceil+1$'        ,  'Depends on count' ],\n",
    "              [ 'Freedman-Diaconis' ,  r'$w=2\\frac{IQR}{\\sqrt[3]{N}}$' ,  'Depends on count and spread' ],\n",
    "]\n",
    "\n",
    "\n",
    "# draw the table\n",
    "fig, ax = plt.subplots(figsize=(10,2))\n",
    "ax.set_axis_off()\n",
    "ht = ax.table(\n",
    "        cellText   = tableData,\n",
    "        colLabels  = colLabs,\n",
    "        colColours = [(.8,.8,.8)] * len(colLabs),\n",
    "        cellLoc    = 'center',\n",
    "        loc        = 'upper left',\n",
    "        )\n",
    "\n",
    "\n",
    "# some adjustments to the fonts etc\n",
    "ht.scale(1,2.5)\n",
    "ht.auto_set_font_size(False)\n",
    "ht.set_fontsize(14)\n",
    "\n",
    "from matplotlib.font_manager import FontProperties\n",
    "for (row, col), cell in ht.get_celld().items():\n",
    "  cell.set_text_props(fontproperties=FontProperties(family='serif'))\n",
    "  if row==0: cell.set_text_props(fontproperties=FontProperties(weight='bold',size=16))\n",
    "\n",
    "# export\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_table_moments2.png', bbox_inches='tight')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "Uc00sP-qjvpC"
   },
   "execution_count": null,
   "outputs": []
  },
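  {
   "cell_type": "code",
   "source": [
    "# A quick numerical check of the binning rules in the table above\n",
    "# (extra demo, not a book figure; numpy implements both rules):\n",
    "data2check = np.random.randn(1000)\n",
    "\n",
    "# Sturges: k = ceil(log2(N))+1\n",
    "kSturges = len(np.histogram_bin_edges(data2check,bins='sturges'))-1\n",
    "print(f'Sturges: numpy gives {kSturges} bins; the formula gives {int(np.ceil(np.log2(len(data2check))))+1}')\n",
    "\n",
    "# Freedman-Diaconis: w = 2*IQR/N^(1/3)\n",
    "# (numpy then rounds to an integer number of equal-width bins)\n",
    "w = 2*stats.iqr(data2check) / len(data2check)**(1/3)\n",
    "fdEdges = np.histogram_bin_edges(data2check,bins='fd')\n",
    "print(f'F-D: formula width {w:.4f}; numpy uses {fdEdges[1]-fdEdges[0]:.4f}')"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },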
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "kWv4gv8ZmsHG"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Figure 4.28: Cityscape of variable histogram bins"
   ],
   "metadata": {
    "id": "O8xR9BAbmsYA"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# define random bin edges\n",
    "bw = np.array([-3])\n",
    "while bw[-1]<3:\n",
    "  bw = np.append(bw,bw[-1]+np.random.rand(1))\n",
    "\n",
    "\n",
    "# create the histogram\n",
    "plt.figure(figsize=(8,4))\n",
    "_,_,hs = plt.hist(np.random.randn(1000),bins=bw,edgecolor='k')\n",
    "\n",
    "# assign random greyscale color to each bar\n",
    "for h in hs:\n",
    "  c = np.random.uniform(low=.1,high=.9,size=1)[0]\n",
    "  h.set_facecolor((c,c,c))\n",
    "\n",
    "plt.xlabel('Data value')\n",
    "plt.ylabel('Count')\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_variableBins.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "0W5Svj__Klug"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "Hc7ckW_HahyO"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 1"
   ],
   "metadata": {
    "id": "K6fF3lMAAGxE"
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "E_OH-MpAAC1c"
   },
   "outputs": [],
   "source": [
    "# function variable\n",
    "x = np.linspace(-3,3,111)\n",
    "\n",
    "sigma = .73\n",
    "\n",
    "# one gaussian\n",
    "a = 1 / (sigma*np.sqrt(2*np.pi))\n",
    "eTerm = -x**2 / (2*sigma**2)\n",
    "gaus = a * np.exp( eTerm )\n",
    "\n",
    "plt.plot(x,gaus,'k',linewidth=2)\n",
    "plt.xlim(x[[0,-1]])\n",
    "plt.xlabel('Numerical value')\n",
    "plt.ylabel('Probability')\n",
    "plt.title('Gaussian probability density',loc='center')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_analytic_gaussian.png')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "pjoEJc_lTch-"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# Create a family of Gaussians\n",
    "\n",
    "N = 50\n",
    "sigmas = np.linspace(.1,3,N)\n",
    "\n",
    "G = np.zeros((N,len(x)))\n",
    "\n",
    "for i in range(N):\n",
    "  a = 1 / (sigmas[i]*np.sqrt(2*np.pi))\n",
    "  eTerm = -x**2 / (2*sigmas[i]**2)\n",
    "  G[i,:] = a * np.exp( eTerm )"
   ],
   "metadata": {
    "id": "y1WtF3joA-iO"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# visualize a few\n",
    "\n",
    "whichGaussians = np.round(np.linspace(4,N-1,8)).astype(int)\n",
    "\n",
    "plt.plot(x,G[whichGaussians,:].T,linewidth=3)\n",
    "plt.xlabel('x')\n",
    "plt.ylabel('Height')\n",
    "plt.legend([rf'$\\sigma = {s:.2f}$' for s in sigmas[whichGaussians]])\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "QSXEAawqFseI"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# now show as a matrix\n",
    "plt.imshow(G,extent=[x[0],x[-1],sigmas[0],sigmas[-1]],\n",
    "           cmap='gray',aspect='auto',origin='lower',\n",
    "           vmin=0,vmax=.8)\n",
    "plt.xlabel('x')\n",
    "plt.ylabel(r'$\\sigma$')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "ujkchZjkFZNU"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# this code is identical to that above, just in subplots for the book figure\n",
    "\n",
    "_,axs = plt.subplots(1,3,figsize=(12,4))\n",
    "\n",
    "# one gaussian\n",
    "axs[0].plot(x,gaus,'k',linewidth=2)\n",
    "axs[0].set_xlabel('x')\n",
    "axs[0].set_xlim(x[[0,-1]])\n",
    "axs[0].set_ylabel('Height')\n",
    "axs[0].set_title(r'$\\bf{A}$)')\n",
    "\n",
    "# a few gaussians\n",
    "axs[1].plot(x,G[whichGaussians,:].T,linewidth=2)\n",
    "axs[1].set_xlabel('x')\n",
    "axs[1].set_ylabel('Height')\n",
    "axs[1].set_xlim(x[[0,-1]])\n",
    "axs[1].legend([rf'$\\sigma = {s:.2f}$' for s in sigmas[whichGaussians]],fontsize=10)\n",
    "axs[1].set_title(r'$\\bf{B}$)')\n",
    "\n",
    "# the gaussian family portrait\n",
    "axs[2].imshow(G,extent=[x[0],x[-1],sigmas[0],sigmas[-1]],\n",
    "           cmap='gray',aspect='auto',origin='lower',\n",
    "           vmin=0,vmax=.8)\n",
    "axs[2].set_xlabel('x')\n",
    "axs[2].set_ylabel(r'$\\sigma$')\n",
    "axs[2].set_title(r'$\\bf{C}$)')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_gaussians.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "4HbmpWYmA-f2"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "NyILcIKyQJHW"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 2"
   ],
   "metadata": {
    "id": "hivrPDWbQI_4"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# here's the sum\n",
    "print( np.sum(G,axis=1) )"
   ],
   "metadata": {
    "id": "8DojuAo3A-dN"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# we don't want the mean...\n",
    "print( np.mean(G,axis=1) ) #* np.mean(np.diff(x))"
   ],
   "metadata": {
    "id": "77Eisbk5A-aq"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# we want the discrete integral\n",
    "print( np.sum(G,axis=1) * np.mean(np.diff(x)) )"
   ],
   "metadata": {
    "id": "GGhc_FF7A-YG"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "he05WMChSW7Y"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 3"
   ],
   "metadata": {
    "id": "yqhfjPv9roK3"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "### the function\n",
    "\n",
    "def computeStats(data):\n",
    "\n",
    "  # for convenience\n",
    "  N = len(data)\n",
    "\n",
    "  ## mean\n",
    "  myMean = sum(data)/N\n",
    "\n",
    "  ## median\n",
    "  # first sort the data\n",
    "  dataSort = sorted(data)\n",
    "\n",
    "  # then compute the median based on whether it's odd or even N\n",
    "  if N%2==1: # odd\n",
    "    myMedian = dataSort[N//2] # for odd N, N//2 == (N-1)/2\n",
    "  else: # even\n",
    "    myMedian = sum(dataSort[N//2-1:N//2+1])/2\n",
    "\n",
    "\n",
    "  ## variance\n",
    "  myVar = sum([(i-myMean)**2 for i in data]) / (N-1)\n",
    "\n",
    "  return myMean,myMedian,myVar\n"
   ],
   "metadata": {
    "id": "zKyB38bbroIF"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# the specified data\n",
    "data = [ 1,7,2,7,3,7,4,7,5,7,6,7 ]\n",
    "\n",
    "# uncomment the following line for random integers (note the exclusive upper bound!)\n",
    "# data = np.random.randint(low=4,high=21,size=24)\n",
    "\n",
    "# my results\n",
    "myMean,myMedian,myVar = computeStats(data)\n",
    "\n",
    "# ground-truth to compare your results:\n",
    "npMean   = np.mean(data)\n",
    "npMedian = np.median(data)\n",
    "npVar    = np.var(data,ddof=1)"
   ],
   "metadata": {
    "id": "7jaPQWV0rsFQ"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# print the results\n",
    "\n",
    "print('         |  Mine  |  numpy')\n",
    "print('----------------------------')\n",
    "print(f'Mean     | {myMean:5.2f}  | {npMean:5.2f}')\n",
    "print(f'Median   | {myMedian:5.2f}  | {npMedian:5.2f}')\n",
    "print(f'Variance | {myVar:5.2f}  | {npVar:5.2f}')"
   ],
   "metadata": {
    "id": "H5bopO31rn8l"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "zUdy3h8Irn5z"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 4"
   ],
   "metadata": {
    "id": "8OKRKDHHrn20"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# experiment parameters\n",
    "sampleSizes = np.arange(5,101)\n",
    "numExperiments = 25\n",
    "\n",
    "# initialize results matrix\n",
    "ddofImpact = np.zeros((len(sampleSizes),numExperiments))\n",
    "\n",
    "# double for-loop over sample sizes and experiment repetitions\n",
    "for ni in range(len(sampleSizes)):\n",
    "  for expi in range(numExperiments):\n",
    "\n",
    "    # generate random data\n",
    "    data = np.random.randint(low=-100,high=101,size=sampleSizes[ni])\n",
    "\n",
    "    # compute variance difference\n",
    "    varDiff = np.var(data,ddof=1)-np.var(data,ddof=0)\n",
    "\n",
    "    # for exercise 5, uncomment the final line below\n",
    "    d = np.var(data,ddof=1)-np.var(data,ddof=0)\n",
    "    a = np.var(data,ddof=1)+np.var(data,ddof=0)\n",
    "    #varDiff = d/a\n",
    "\n",
    "    # store magnitude\n",
    "    ddofImpact[ni,expi] = varDiff"
   ],
   "metadata": {
    "id": "mC6NCsHH2zRH"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# compute average and std across experiment runs\n",
    "meanDiffs = np.mean(ddofImpact,axis=1)\n",
    "stdDiffs  = np.std(ddofImpact,axis=1,ddof=1)\n",
    "\n",
    "\n",
    "# now for visualization\n",
    "plt.figure(figsize=(10,4))\n",
    "plt.errorbar(sampleSizes,meanDiffs,stdDiffs,\n",
    "             color='k',marker='.',fmt=' ',capsize=3)\n",
    "\n",
    "# make the plot look a bit nicer\n",
    "plt.title('Impact of ddof parameter',loc='center')\n",
    "plt.xlabel('Sample size')\n",
    "plt.ylabel('Variance difference')\n",
    "plt.xlim([sampleSizes[0]-2,sampleSizes[-1]+2])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_varDiffs.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "kcSZD3DW452P"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "o4IxHNXt5VIy"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 5"
   ],
   "metadata": {
    "id": "FfUwtFKC2zUD"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# The code solution to this exercise is in the previous exercise.\n",
    "# Just uncomment the second calculation of variable varDiff."
   ],
   "metadata": {
    "id": "4szOJDMkiKgb"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# The reason why the normalized differences are simply 1/(2n-1) comes from\n",
    "# writing out the difference using the formula for variance. You'll find\n",
    "# that the summation terms cancel and only the 1/n or 1/(n-1) terms remain.\n",
    "# Then you apply a bit of algebra to reduce to 1/(2n-1)."
   ],
   "metadata": {
    "id": "iaNZKYlC2zXX"
   },
   "execution_count": null,
   "outputs": []
  },
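  {
   "cell_type": "code",
   "source": [
    "# A quick numerical confirmation of the 1/(2n-1) result (extra demo):\n",
    "n = 10\n",
    "randdata = np.random.randn(n)\n",
    "d = np.var(randdata,ddof=1) - np.var(randdata,ddof=0)\n",
    "a = np.var(randdata,ddof=1) + np.var(randdata,ddof=0)\n",
    "print(f'Normalized difference: {d/a:.8f}')\n",
    "print(f'             1/(2n-1): {1/(2*n-1):.8f}')"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },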
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "yr2vIqGkCgm8"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 6"
   ],
   "metadata": {
    "id": "u7q_pRmBCgjz"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Compare mean and median with and without outliers, for large and small N\n",
    "\n",
    "Ns = [ 50,5000 ]\n",
    "\n",
    "# initialize results matrices\n",
    "means = np.zeros((2,2))\n",
    "medians = np.zeros((2,2))\n",
    "\n",
    "\n",
    "_,axs = plt.subplots(2,2,figsize=(8,6))\n",
    "\n",
    "for ni in range(len(Ns)):\n",
    "\n",
    "  # create the data as normal random numbers\n",
    "  data = np.random.randn(Ns[ni])\n",
    "\n",
    "  for outi in range(2):\n",
    "\n",
    "    # create an outlier by raising the largest sample to the 4th power (exponent 1 leaves it unchanged)\n",
    "    maxval,maxidx = np.max(data),np.argmax(data)\n",
    "    data[maxidx] = maxval**([1,4][outi])\n",
    "\n",
    "    # store results in matrices\n",
    "    means[ni,outi] = np.mean(data)\n",
    "    medians[ni,outi] = np.median(data)\n",
    "\n",
    "    # plot\n",
    "    h = axs[ni,outi].hist(data,bins=np.linspace(-3,3,21),color='gray')\n",
    "    maxY = np.max(h[0])\n",
    "    axs[ni,outi].plot([np.mean(data),np.mean(data)],[0,maxY],color=[.7,.7,.7],linewidth=3,label='Mean')\n",
    "    axs[ni,outi].plot([np.median(data),np.median(data)],[0,maxY],'k--',linewidth=3,label='Median')\n",
    "    axs[ni,outi].legend(fontsize=12)\n",
    "    axs[ni,outi].set_xlim([-3,3])\n",
    "    axs[ni,outi].set_title(f'N={Ns[ni]}, {(\"no\",\"with\")[outi]} outlier',loc='center')\n",
    "\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_outliersN.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "4S9HgRguCvUz"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# print results\n",
    "for ni in range(2):\n",
    "  print(f'With N = {Ns[ni]}, the mean increased by {means[ni,1]-means[ni,0]:.2f}')\n",
    "  print(f'With N = {Ns[ni]}, the median increased by {medians[ni,1]-medians[ni,0]:.2f}')\n",
    "  print(' ')"
   ],
   "metadata": {
    "id": "2TI3ARllCvSN"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "nqmhdqeb2zis"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 7"
   ],
   "metadata": {
    "id": "CbuI4lwQPbup"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "mean, variance, skew, kurtosis = stats.norm.stats(loc=1,scale=2,moments='mvsk')\n",
    "# for other distributions, replace 'norm' with uniform, lognorm, or exponnorm\n",
    "\n",
    "print(f' Average: {mean:.2f}')\n",
    "print(f'Variance: {variance:.2f}')\n",
    "print(f'Skewness: {skew:.2f}')\n",
    "print(f'Kurtosis: {kurtosis:.2f}')"
   ],
   "metadata": {
    "id": "vPaux_S1Pdmh"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# url to site with list of scipy distribution options:\n",
    "# https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions"
   ],
   "metadata": {
    "id": "8TXVhub_Pbrk"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "nRU3YF6S3mJn"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 8"
   ],
   "metadata": {
    "id": "guDsqh102zdc"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# a function to compute and return moments\n",
    "def moments(data):\n",
    "  # 1st moment is the mean\n",
    "  mean = np.mean(data)\n",
    "\n",
    "  # 2nd moment is variance\n",
    "  var = np.var(data,ddof=1)\n",
    "\n",
    "  # 3rd moment is skew\n",
    "  skew = stats.skew(data)\n",
    "\n",
    "  # 4th moment is kurtosis\n",
    "  kurt = stats.kurtosis(data)\n",
    "\n",
    "  # put them all together\n",
    "  return mean,var,skew,kurt"
   ],
   "metadata": {
    "id": "xRLVkETSIZJv"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# generate the data\n",
    "sigmas = np.linspace(.1,1.2,20)\n",
    "X = np.random.randn(13524)\n",
    "\n",
    "# initialize the data results matrices\n",
    "M = np.zeros((len(sigmas),4))\n",
    "data = [0]*len(sigmas)\n",
    "\n",
    "# compute and store all moments in a matrix\n",
    "for i,s in enumerate(sigmas):\n",
    "  data[i] = np.exp(X*s) # create the data\n",
    "  M[i,:] = moments( data[i] ) # compute its moments"
   ],
   "metadata": {
    "id": "4zajWkvaIZMx"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "_,axs = plt.subplots(1,2,figsize=(10,4))\n",
    "\n",
    "# plot the moments\n",
    "m = 'osp*' # markers\n",
    "l = ['-','--',':','-.'] # line type\n",
    "c = [0,.2,.4,.6] # grayscale\n",
    "\n",
    "# plot each line separately\n",
    "for i in range(4):\n",
    "  axs[0].plot(sigmas,M[:,i],m[i]+l[i],color=np.ones(3)*c[i],\n",
    "              linewidth=2,markersize=8)\n",
    "\n",
    "axs[0].set_xlabel(r'$\\sigma$')\n",
    "axs[0].set_ylabel('Moment value')\n",
    "axs[0].legend(['Mean','Var.','Skew','Kurt.'])\n",
    "axs[0].set_title(r'$\\bf{A}$)  Statistical moments')\n",
    "\n",
    "\n",
    "\n",
    "# now to plot selected distributions\n",
    "for i in np.linspace(0,len(sigmas)-1,5).astype(int):\n",
    "\n",
    "  # get histogram values\n",
    "  y,x = np.histogram(data[i],bins='fd')\n",
    "\n",
    "  # plot as line\n",
    "  h = axs[1].plot((x[:-1]+x[1:])/2,y,'.-',linewidth=2,markersize=8,\n",
    "              label=rf'$\\sigma={sigmas[i]:.2f}$')\n",
    "\n",
    "  # add the vertical lines in the moments plot\n",
    "  axs[0].annotate(f'{sigmas[i]:.2f}',xy=(sigmas[i],.1),horizontalalignment='center',fontsize=10,\n",
    "                  xytext=(sigmas[i],axs[0].get_ylim()[1]/3),arrowprops=dict(width=4,linewidth=0,color=h[0].get_color(),alpha=.8))\n",
    "\n",
    "# some niceties\n",
    "axs[1].legend()\n",
    "axs[1].set_xlim([0,6])\n",
    "axs[1].set_xlabel('Data value')\n",
    "axs[1].set_ylabel('Count')\n",
    "axs[1].set_title(r'$\\bf{B}$)  Distributions')\n",
    "\n",
    "# optional y-axis logarithmic scaling to see subtle changes in the other moments\n",
    "axs[0].set_yscale('log')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_moments.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "uXZ2Q-IgIZQA"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "44pbP_O12zgB"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 9"
   ],
   "metadata": {
    "id": "9MqDoe7arn0P"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate random datasets\n",
    "X = np.random.randn(10000)\n",
    "eX = np.exp(X)\n",
    "\n",
    "# iqr and std\n",
    "iqr = stats.iqr(X)\n",
    "std = np.std(X,ddof=1)\n",
    "\n",
    "eiqr = stats.iqr(eX)\n",
    "estd = np.std(eX,ddof=1)\n",
    "\n",
    "\n",
    "# print their comparison\n",
    "print('Normal distribution:')\n",
    "print(f'    IQR = {iqr:.3f}')\n",
    "print(f'1.35std = {1.35*std:.3f}')\n",
    "\n",
    "print('\\nLog-normal distribution:')\n",
    "print(f'    IQR = {eiqr:.3f}')\n",
    "print(f'1.35std = {1.35*estd:.3f}')"
   ],
   "metadata": {
    "id": "UaxJTNx8oU-V"
   },
   "execution_count": null,
   "outputs": []
  },
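  {
   "cell_type": "code",
   "source": [
    "# Side note on where the constant 1.35 comes from: for a normal distribution,\n",
    "# IQR = 2*z75*sigma, where z75 = norm.ppf(.75) is the 75th-percentile z-value,\n",
    "# so IQR/std = 2*norm.ppf(.75), which is approximately 1.349.\n",
    "print(f'2*norm.ppf(.75) = {2*stats.norm.ppf(.75):.4f}')"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },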
  {
   "cell_type": "code",
   "source": [
    "_,axs = plt.subplots(1,2,figsize=(10,4))\n",
    "\n",
    "for i in range(2):\n",
    "\n",
    "  # select data\n",
    "  if i==0:\n",
    "    data = X\n",
    "  else:\n",
    "    data = eX\n",
    "\n",
    "  # convenience variable\n",
    "  mean = np.mean(data)\n",
    "  std  = np.std(data,ddof=1)/1.35\n",
    "\n",
    "\n",
    "  # histogram of the data and maximum height value\n",
    "  h = axs[i].hist(data,bins='fd',color=[.8,.8,.8])\n",
    "  maxY = np.max(h[0])\n",
    "\n",
    "  # standard deviation lines\n",
    "  axs[i].plot([mean+std,mean+std],[0,maxY],color=(.4,.4,.4),linestyle='--',linewidth=3,label='std/1.35')\n",
    "  axs[i].plot([mean-std,mean-std],[0,maxY],color=(.4,.4,.4),linestyle='--',linewidth=3)\n",
    "\n",
    "  # quartiles lines\n",
    "  quartiles = np.percentile(data,[25,75])\n",
    "  axs[i].plot([quartiles[0],quartiles[0]],[0,maxY],'k',linewidth=3,label='quartiles')\n",
    "  axs[i].plot([quartiles[1],quartiles[1]],[0,maxY],'k',linewidth=3)\n",
    "\n",
    "  axs[i].set(xlabel='Data value',ylabel='Count')\n",
    "  axs[i].legend()\n",
    "  if i==1: axs[i].set(xlim=[-1,10]) # manually set for e^X\n",
    "\n",
    "\n",
    "axs[0].set_title(r'$\\bf{A}$)  Normal data')\n",
    "axs[1].set_title(r'$\\bf{B}$)  Non-normal data')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_stdiqr.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "6oo78D2zoVBU"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "8SN_wpmYoVEM"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 10"
   ],
   "metadata": {
    "id": "C-D3jDt6bs4H"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# FWHM function\n",
    "def empFWHM(x,y):\n",
    "\n",
    "  # normalize to [0,1]\n",
    "  y = y-np.min(y)\n",
    "  y = y/np.max(y)\n",
    "\n",
    "  # find peak\n",
    "  idx = np.argmax(y)\n",
    "\n",
    "  # find value before peak\n",
    "  prePeak = x[ np.argmin(np.abs(y[:idx]-.5)) ]\n",
    "\n",
    "  # find value after peak\n",
    "  pstPeak = x[ idx+np.argmin(np.abs(y[idx:]-.5)) ]\n",
    "\n",
    "  # return fwhm as that distance\n",
    "  return pstPeak-prePeak,(prePeak,pstPeak)"
   ],
   "metadata": {
    "id": "2U12A2zobs6X"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# try on a pdf to compare with the analytic solution\n",
    "x = np.linspace(-8,8,1001)\n",
    "s = 1.9\n",
    "\n",
    "# create an analytic Gaussian\n",
    "pureGaus = np.exp( (-x**2)/(2*s**2) )\n",
    "\n",
    "# empirical and analytic FWHM\n",
    "fwhm,halfpnts = empFWHM(x,pureGaus)\n",
    "afwhm = 2*s*np.sqrt(2*np.log(2))\n",
    "\n",
    "print(f'Empirical  FWHM = {fwhm:.2f}')\n",
    "print(f'Analytical FWHM = {afwhm:.2f}')\n",
    "\n",
    "# show the plot\n",
    "plt.plot(x,pureGaus,'k',linewidth=2)\n",
    "plt.plot([halfpnts[0],halfpnts[1]],[.5,.5],'k--')\n",
    "plt.title(f'Empirical: {fwhm:.2f}, Analytic: {afwhm:.2f}',loc='center')\n",
    "plt.xlim([x[0],x[-1]])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "TBgw7XjgoVKR"
   },
   "execution_count": null,
   "outputs": []
  },
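  {
   "cell_type": "code",
   "source": [
    "# Side note on the analytic formula: the Gaussian falls to half its peak\n",
    "# height where exp(-x**2/(2*s**2)) = 1/2, i.e., at x = s*sqrt(2*ln(2)).\n",
    "# The full width is twice that, giving FWHM = 2*s*sqrt(2*ln(2)).\n",
    "halfpoint = s*np.sqrt(2*np.log(2))\n",
    "print( np.exp(-halfpoint**2/(2*s**2)) ) # half of the unit-height peak"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },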
  {
   "cell_type": "code",
   "source": [
    "# compare empirical and analytic FWHM across a range of widths\n",
    "ss = np.linspace(.1,5,15)\n",
    "\n",
    "fwhms = np.zeros((len(ss),2))\n",
    "\n",
    "for i,s in enumerate(ss):\n",
    "\n",
    "  # create the Gaussian (don't need the 'a' term b/c we're already normalizing)\n",
    "  gx = np.exp( (-x**2)/(2*s**2) )\n",
    "\n",
    "  # compute FWHM and other related quantities\n",
    "  fwhms[i,0] = empFWHM(x,gx)[0]\n",
    "  fwhms[i,1] = 2*s*np.sqrt(2*np.log(2))\n",
    "\n",
    "\n",
    "m = ('s','d')\n",
    "for i in range(2):\n",
    "  plt.plot(ss,fwhms[:,i],m[i],color=(i/3,i/3,i/3),markerfacecolor=(i/3,i/3,i/3),markersize=10)\n",
    "\n",
    "plt.legend(['Empirical','Analytical'])\n",
    "plt.xlabel(r'$\\sigma$ value')\n",
    "plt.ylabel('FWHM')\n",
    "\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_fwhm1.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "NtQmyYtKgeRg"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# The problem is that the later Gaussians don't have a wide enough x-axis range,\n",
    "# so the normalization distorts the Gaussian. This is illustrated in the code below.\n",
    "# Increasing the range of the x-axis grid will fix the problem.\n",
    "\n",
    "# normalized version\n",
    "y = gx-np.min(gx)\n",
    "y = y/np.max(y)\n",
    "\n",
    "# plot original and normalized\n",
    "plt.plot(x,gx,label='Original')\n",
    "plt.plot(x,y,label='Normalized')\n",
    "\n",
    "# some additional thingies\n",
    "plt.plot(x[[0,-1]],[0,0],'k--')\n",
    "plt.ylim([-.05,1.05])\n",
    "plt.xlim(x[[0,-1]])\n",
    "plt.legend()\n",
    "\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "8XaFnokCgeOq"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# Now for an empirical histogram\n",
    "\n",
    "# try on random data\n",
    "data = np.random.randn(12345)\n",
    "\n",
    "# histogram\n",
    "y,x = np.histogram(data,bins=100)\n",
    "x = (x[1:]+x[:-1])/2\n",
    "\n",
    "# estimate the FWHM\n",
    "h,peeks = empFWHM(x,y)\n",
    "midheight = (np.max(y)-np.min(y))/2\n",
    "\n",
    "# and plot\n",
    "plt.plot(x,y,'ko-',markerfacecolor='w')\n",
    "plt.plot([peeks[0],peeks[1]],[midheight,midheight],'k--')\n",
    "plt.title(f'Empirical FWHM = {h:.2f}',loc='center')\n",
    "plt.xlim([x[0],x[-1]])\n",
    "plt.xlabel('Data value')\n",
    "plt.ylabel('Count')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_fwhm2.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "x8Gk6qL9oVNM"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "FlAs32WngeL_"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Exercise 11"
   ],
   "metadata": {
    "id": "HXtpFgt_geJY"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# generate data\n",
    "N = 1000\n",
    "data = np.random.randn(N)\n",
    "# data = np.random.rand(N) # uncomment for uniform noise\n",
    "\n",
    "binOpt = [ 40,'fd','sturges','scott' ]\n",
    "\n",
    "for b in binOpt:\n",
    "\n",
    "  # extract histogram\n",
    "  y,x = np.histogram(data,bins=b)\n",
    "  x = (x[1:]+x[:-1])/2\n",
    "\n",
    "  # plot it\n",
    "  plt.plot(x,y,'s-',linewidth=3,label=str(b))\n",
    "\n",
    "\n",
    "plt.legend()\n",
    "plt.xlabel('Data value')\n",
    "plt.ylabel('Count')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.savefig('desc_ex_histbins.png')\n",
    "plt.show()"
   ],
   "metadata": {
    "id": "2YxFkVZKCFmN"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "### Some observations about this exercise:\n",
    "# - Some binnings give \"taller\" distributions, because they have fewer bins;\n",
    "#   fewer bins means more data points per bin. Try normalizing the histograms\n",
    "#   by plotting y/np.max(y)\n",
    "#\n",
    "# - When you use uniformly distributed data, it looks like some histograms disappear,\n",
    "#   but in fact the different rules give identical bin counts so the histograms overlap.\n",
    "#"
   ],
   "metadata": {
    "id": "t6ppCM6ZCFo6"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [],
   "metadata": {
    "id": "pySXHdS5Pcg9"
   },
   "execution_count": null,
   "outputs": []
  }
 ]
}
