{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Bayesian Treatment and Response\n",
    "\n",
     "In this notebook we outline a method for estimating treatment response models using Bayesian methods. We focus initially on the discrete-treatment, continuous-response case: some individuals receive a treatment (belong to a union, receive training, smoke, etc.), while others do not. For each group we have a continuous measure of outcomes (e.g., wages or child birthweight).\n",
    "\n",
    "Why Bayesian methods? Why not just stick with the same old classical methods? Some reasons:\n",
    "\n",
    "1. Bayesian methods place a lot of structure on the data generation process but also allow one to fully characterize standard errors. Hence, they can be pretty good in small samples. \n",
     "2. Bayesian methods allow one to say something about quantities like the correlation across outcomes, which is typically not observable.\n",
    "3. Bayesian methods also generate some things like individual-level treatment effects just as a by-product of estimation.\n",
    "4. Bayesian methods avoid optimization and instead draw samples from the distribution of parameters implied by the data. This can be a great advantage in situations in which optimization is difficult.\n",
    "5. Bayesian methods allow inclusion of prior information or beliefs about parameters. One area in which this is relevant in the treatment context: beliefs about independence of outcomes and treatment.\n",
    "\n",
    "\n",
    "# The basic model and theory\n",
    "\n",
    "There are treatment and control groups, and a set of covariates. The model is essentially a switching regression, where agents select into different groups in part based on things we observe about them and maybe on some things we do not. \n",
    "\n",
     "Treatment is a dichotomous variable $z \\in \\{0,1\\}$, and we observe $z=1$ if some underlying latent variable $z^*$ is greater than zero, and $z=0$ otherwise. \n",
    "\n",
     "We have a Probit-like model of selection into treatment:\n",
    "$$\n",
     "z^* = \\eta W + e_z\n",
    "$$\n",
    "And \n",
    "$$ \\begin{array}{ccc}\n",
    "z = 1 & \\textrm{if} & z^* = \\eta W + e_z > 0 \\\\\n",
    "z = 0 & \\textrm{otherwise} &\n",
    "\\end{array}\n",
    "$$\n",
    "where we assume that $e_z \\sim N(0,1)$. \n",
    "\n",
    "Outcome equations differ based on whether or not the individual is treated. If $z=1$, the following equation explains the outcome $y$:\n",
    "$$\n",
    "y_1 = X\\beta_1 + e_1, \\quad z=1, \\quad e_1 \\sim N(0,\\sigma_1^2)\n",
    "$$\n",
    "$$\n",
     "y_0 = X\\beta_0 + e_0, \\quad z=0, \\quad e_0 \\sim N(0,\\sigma_0^2)\n",
    "$$\n",
    "\n",
    "## Point of interest\n",
    "\n",
     "In observational data, we never actually observe both outcomes for a given individual: we don't see what happens to a patient who both takes and does not take a medication, or to a worker who both does and does not join a union. But a Bayesian method allows us to consider this as well - the idea is to treat the unobserved outcome as yet another latent variable that is estimated/simulated along with everything else.\n",
    "\n",
    "The variance matrix for $y_1,y_0$, and $z$ is:\n",
    "\n",
    "$$\n",
    "\\Sigma = \\left[\n",
    "\\begin{array}{ccc}\n",
    "\\sigma_1^2  & \\sigma_{10} & \\sigma_{1z} \\\\\n",
    "                                   & \\sigma_0^2 & \\sigma_{0z} \\\\\n",
    "                                   &      &  1 \n",
    "\\end{array}\n",
    "\\right]\n",
    "$$\n",
     "We also have to assume that $\\sigma_z^2=1$ because of the indeterminacy of the scale parameter in a probit model. If this model is to be estimated using classical methods, one usually has to assume that $\\sigma_{10}=0$, as there is no basis for estimating it without observing both $y_1$ and $y_0$ for the same individual. But this is never the case in observational data (we don't see what happens to a worker both when they are in a union and when they are not!).\n",
    "\n",
    "## Key Idea: Gibbs sampling\n",
    "\n",
    "The critical tool used in estimation is Gibbs sampling. Suppose we have a multivariate distribution $f(x_1,x_2)$ we wish to sample from. Gibbs sampling says that we can draw from this distribution conditionally and sequentially to get draws from the overall joint distribution. \n",
    "\n",
    "That is, we can take $x_2$ given, and then draw $x_1$ from the conditional distribution $f(x_1|x_2)$. Now that we have an $x_1$, we can draw $x_2$ from $f(x_2|x_1)$. Now we have another value of $x_2$, so we can get another draw $x_1$ from the new $f(x_1|x_2)$ and so on. It can be shown (but it's not that easy!) that this method eventually produces draws $(x_1,x_2)$ from the joint distribution $f(x_1,x_2)$. \n",
    "\n",
    "This tool is more useful than one might think at first glance because a lot of applications have the property that $f(x_1,x_2)$ is very complicated, but $f(x_1|x_2)$ and $f(x_2|x_1)$ are extremely simple."
   ]
  },
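To make the two-step cycle concrete, here is a small pure-Python sketch (separate from the Stata/Mata estimation below; the correlation `rho` and the bivariate-normal target are illustrative choices). It samples a standard bivariate normal by alternating the two univariate conditionals $x_1|x_2 \sim N(\rho x_2,\, 1-\rho^2)$ and $x_2|x_1 \sim N(\rho x_1,\, 1-\rho^2)$:

```python
import random

def gibbs_bivariate(rho, n_draws, seed=0):
    """Sample a standard bivariate normal with correlation rho by
    alternating its two univariate conditional distributions."""
    rng = random.Random(seed)
    cond_sd = (1 - rho ** 2) ** 0.5   # sd of x1 | x2 (and of x2 | x1)
    x1, x2 = 0.0, 0.0                 # arbitrary starting values
    draws = []
    for _ in range(n_draws):
        x1 = rng.gauss(rho * x2, cond_sd)   # draw x1 from f(x1 | x2)
        x2 = rng.gauss(rho * x1, cond_sd)   # draw x2 from f(x2 | x1)
        draws.append((x1, x2))
    return draws

draws = gibbs_bivariate(0.8, 20000)
kept = draws[1000:]                   # discard burn-in draws
m1 = sum(x for x, _ in kept) / len(kept)
m2 = sum(y for _, y in kept) / len(kept)
cov = sum((x - m1) * (y - m2) for x, y in kept) / len(kept)
v1 = sum((x - m1) ** 2 for x, _ in kept) / len(kept)
v2 = sum((y - m2) ** 2 for _, y in kept) / len(kept)
corr = cov / (v1 * v2) ** 0.5
```

After a burn-in, the sample correlation of the draws settles near `rho`, even though we never sampled from the joint distribution directly.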
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Methods of estimation\n",
    "\n",
    "## Description of a Gibbs sampler\n",
    "\n",
     "If we wanted to write the full likelihood of the data, it might consist of three parts: a probit-style likelihood describing treatment or not, and then a likelihood for each outcome conditional on treatment. This is tricky in the current setting because we have a trivariate normal distribution, and what can be thought of as two latent variables - the threshold variable for treatment, and the value of whichever outcome is unobserved. This is tough (but not impossible) - Heckman and coauthors have some papers describing methods for dealing with these problems. But we will go directly to the Gibbs sampler."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Gibbs sampler\n",
    "\n",
    "To design a Gibbs sampler, one just has to think about the conditional distribution of one thing when everything else is given. Heuristically, my idea is to proceed as follows:\n",
    "\n",
     "1. Given $\\beta,\\eta,\\Sigma, X,y_0$, draw values for $y_1$. That is, draw a treatment outcome for those who were not treated ($z = 0$). This is fairly simple, as $y_1$ is univariate normal conditional on everything else (all other data and parameters).\n",
    "\n",
    "2. Given $\\beta,\\eta,\\Sigma, X,y_1$, draw values for $y_0$. That is, draw a non-treated outcome for those who were treated (z=1). Same as previous: univariate normal.\n",
    "\n",
    "3. Given $\\eta,\\Sigma, W,y_0,y_1,z$, draw values for $z^*$, the unobserved latent variable describing selection into treatment. $z^*$ is univariate truncated normal; we require $z^*>0$ if treated, $z^*<0$ if not. \n",
    "\n",
    "4. Given $\\Sigma, X,y_0,y_1,z$ draw values for $\\beta_0$. $\\beta_0$ is multivariate normal, basically in the same fashion as the typical regression.\n",
    "\n",
    "5. Given $\\Sigma,X,y_0,y_1,z$ draw values for $\\beta_1$. Same as 4.\n",
    "\n",
     "6. Draw values for the 5 free terms in $\\Sigma$. This is a bit more complicated, and we use a Markov chain Monte Carlo step for this.\n",
     "\n",
     "Almost everything here just requires a normal distribution. Number 6 is a little more complicated, but we can use a Metropolis-within-Gibbs step for that. The key formula needed is that of the conditional normal. If we want to know the distribution of some subset $y_1$ given $y_2$, when the mean vectors are $\\mu_1, \\mu_2$ and the variance matrix is $\\Sigma$, we use the partition:\n",
    "\n",
    "\\begin{equation*}\n",
    "\\Sigma = \\left[ \\begin{array}{c|c}\n",
    "\\Sigma_{11} & \\Sigma_{12} \\\\\n",
    "\\hline\n",
    "\\Sigma_{21} & \\Sigma_{22}\n",
    "\\end{array}\\right]\n",
    "\\end{equation*}\n",
    "\n",
    "The conditional mean of $y_1$ given $y_2$ is:\n",
    "\\begin{equation*}\n",
    "E[y_{1|2}] = \\mu_1 + \\Sigma_{12}\\Sigma_{22}^{-1}(y_2 - \\mu_2) \n",
    "\\end{equation*}\n",
    "and the conditional variance of $y_1$ given $y_2$ is:\n",
    "\\begin{equation*}\n",
    "V[y_{1|2}] = \\Sigma_{11} - \\Sigma_{12}\\Sigma_{22}^{-1}\\Sigma_{21} \n",
    "\\end{equation*}\n",
    "\n"
   ]
  },
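The conditional-normal formulas above can be evaluated numerically. Here is a pure-Python sketch for the case in this notebook, where the conditioning block has two elements so $\Sigma_{22}$ is 2x2 and can be inverted by hand (the means, covariances, and $y_2$ values below are made up for illustration, not estimates from the model):

```python
def inv2(a, b, c, d):
    """Inverse of the 2x2 matrix [[a, b], [c, d]]."""
    det = a * d - b * c
    return (d / det, -b / det, -c / det, a / det)

def conditional_normal(mu1, mu2, y2, S11, S12, S22):
    """E[y1|y2] = mu1 + S12 S22^-1 (y2 - mu2) and
    V[y1|y2] = S11 - S12 S22^-1 S21, with a 2-vector y2.
    mu2, y2, S12 are length-2 tuples; S22 is a 2x2 tuple-of-rows."""
    ia, ib, ic, id_ = inv2(S22[0][0], S22[0][1], S22[1][0], S22[1][1])
    # the row vector S12 * S22^-1
    w1 = S12[0] * ia + S12[1] * ic
    w2 = S12[0] * ib + S12[1] * id_
    dev1, dev2 = y2[0] - mu2[0], y2[1] - mu2[1]
    cond_mean = mu1 + w1 * dev1 + w2 * dev2
    cond_var = S11 - (w1 * S12[0] + w2 * S12[1])
    return cond_mean, cond_var

# illustrative numbers: sigma1^2 = 4, sigma0^2 = 2, sigma_z^2 = 1
m, v = conditional_normal(
    mu1=1.0, mu2=(0.5, 0.0), y2=(1.5, 0.7),
    S11=4.0, S12=(0.5, 0.3),
    S22=((2.0, 0.2), (0.2, 1.0)),
)
```

Note that the conditional variance is always smaller than the marginal variance `S11`: conditioning on correlated outcomes removes uncertainty.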
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Step 1:**\n",
    "\n",
     "Use $\\mu_1=X\\beta_1$, $\\mu_2=[X\\beta_0, W\\eta]$, $y_2=[y_0, z]$, $\\Sigma_{12} = \\Sigma_{21}' =\\left[\\sigma_{10},\\sigma_{1z}\\right]$,\n",
    "and\n",
    "\\begin{equation*}\n",
    "\\Sigma_{22}=\\left[\\begin{array}{cc}\\sigma_0^2 & \\sigma_{0z} \\\\ \\sigma_{0z} & 1 \\end{array} \\right]\n",
    "\\end{equation*}\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Step 2:**\n",
    "\n",
     "Use $\\mu_1=X\\beta_0$, $\\mu_2=[X\\beta_1,W\\eta]$, $y_2=[y_1, z]$, $\\Sigma_{12} = \\Sigma_{21}' =\\left[\\sigma_{10},\\sigma_{0z}\\right]$,\n",
    "and\n",
    "\\begin{equation*}\n",
    "\\Sigma_{22}=\\left[\\begin{array}{cc}\\sigma_1^2 & \\sigma_{1z} \\\\ \\sigma_{1z} & 1 \\end{array} \\right]\n",
    "\\end{equation*}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Step 3:**\n",
    "\n",
    "Use $\\mu_1 = W\\eta$, $\\mu_2=[X\\beta_1,X\\beta_0]$, $y_2=[y_1, y_0]$, $\\Sigma_{12} = \\Sigma_{21}' =\\left[\\sigma_{1z},\\sigma_{0z}\\right]$,\n",
    "and\n",
    "\\begin{equation*}\n",
    "\\Sigma_{22}=\\left[\\begin{array}{cc}\\sigma_1^2 & \\sigma_{10} \\\\ \\sigma_{10} & \\sigma_0^2 \\end{array} \\right]\n",
    "\\end{equation*}\n",
    "\n",
     "But note that draws must be from a truncated normal distribution: if $z=1$, the draw is truncated from below at 0 (so that $z^*>0$), and if $z=0$ it is truncated from above at 0 (so that $z^*<0$)."
   ]
  },
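The truncated draws in Step 3 can be sketched with the standard inverse-CDF construction, which is also what the Mata code later in this notebook uses: map a uniform draw into the relevant piece of the normal CDF, then invert. A pure-Python sketch using `statistics.NormalDist` (the function name and inputs are illustrative, and for simplicity the conditional variance is taken to be 1 here):

```python
import random
from statistics import NormalDist

def draw_zstar(mu, treated, rng):
    """Inverse-CDF draw of z* ~ N(mu, 1) truncated at 0:
    from the right piece (z* > 0) if treated, the left piece otherwise."""
    nd = NormalDist()              # standard normal
    p0 = nd.cdf(-mu)               # P(z* <= 0) = Phi(-mu)
    u = rng.random()
    if treated:
        p = p0 + (1 - p0) * u      # uniform on (p0, 1): the z* > 0 piece
    else:
        p = p0 * u                 # uniform on (0, p0): the z* < 0 piece
    # guard against p = 0 or 1, where inv_cdf is undefined
    p = min(max(p, 1e-12), 1 - 1e-12)
    return mu + nd.inv_cdf(p)

rng = random.Random(1)
treated_draws = [draw_zstar(-0.5, True, rng) for _ in range(2000)]
control_draws = [draw_zstar(-0.5, False, rng) for _ in range(2000)]
```

Every treated draw lands above 0 and every control draw below 0, by construction, whatever the conditional mean `mu` is.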
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Step 4/5/6:**\n",
     "\n",
     "We now have all the data that we need, and all we have to do is transform the dependent variables so that they are purged of cross-equation correlation. That is, we remove cross-variable correlation from $y_1,y_0,$ and $z$ and then observe that:\n",
    "\\begin{equation*}\n",
    "\\beta_1 \\sim N\\left[ (X'X)^{-1}X'\\hat{y}_1, \\sigma_{1}^2(X'X)^{-1} \\right]\n",
    "\\end{equation*}\n",
    "Similarly:\n",
    "\\begin{equation*}\n",
    "\\beta_0 \\sim N\\left[ (X'X)^{-1}X'\\hat{y}_0, \\sigma_{0}^2(X'X)^{-1} \\right]\n",
    "\\end{equation*}\n",
    "and finally\n",
    "\\begin{equation*}\n",
    "\\eta \\sim N\\left[ (W'W)^{-1}W'\\hat{z}, (W'W)^{-1}\\right]\n",
    "\\end{equation*}\n",
    "\n",
    "\n",
    "\n"
   ]
  },
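The $\beta$ draws above can be sketched in the one-regressor case, where $(X'X)^{-1}$ is just a scalar (pure Python; the simulated data, the flat prior, and the known error variance are illustrative assumptions):

```python
import random

def draw_beta(x, y_hat, sigma2, rng):
    """One posterior draw of a scalar regression coefficient,
    beta ~ N( (x'x)^-1 x'y_hat, sigma2 * (x'x)^-1 ),
    the one-regressor version of the formulas above (flat prior)."""
    xx = sum(xi * xi for xi in x)
    mean = sum(xi * yi for xi, yi in zip(x, y_hat)) / xx
    sd = (sigma2 / xx) ** 0.5
    return rng.gauss(mean, sd)

rng = random.Random(42)
# simulated data: y = 2x + standard normal noise
x = [rng.gauss(0, 1) for _ in range(500)]
y = [2.0 * xi + rng.gauss(0, 1) for xi in x]

betas = [draw_beta(x, y, 1.0, rng) for _ in range(3000)]
post_mean = sum(betas) / len(betas)
```

The average of many such draws sits on the OLS estimate, while their spread gives the posterior standard error directly; no separate variance formula is needed after sampling.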
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To provide a bit more detail about what $\\hat{y}_1$ looks like, for example, we see that when $y_1$ is being drawn, the $\\Sigma$ matrix partitions as follows:\n",
    "\n",
    "\\begin{equation*}\n",
    "\\Sigma = \\left[ \\begin{array}{c|c}\n",
    "\\sigma_1^2 & \\begin{array}{cc} \\sigma_{10} & \\sigma_{1z} \\end{array} \\\\\n",
    "\\hline\n",
    "\\begin{array}{c}\n",
    "\\sigma_{10} \\\\\n",
    "\\sigma_{1z} \\end{array}& \\begin{array}{cc} \\sigma_{0}^2 & \\sigma_{0z} \\\\ \\sigma_{0z} & 1 \\end{array}\n",
    "\\end{array}\\right]\n",
    "\\end{equation*}\n",
    "\n",
    "the formula for $\\hat{y}_1$ is:\n",
    "\n",
    "\\begin{equation*}\n",
     "\\hat{y}_1 = X\\beta_1 + \\Sigma_{12}\\Sigma_{22}^{-1}\\,(y_0-X\\beta_0,\\; z-W\\eta)'\n",
    "\\end{equation*}\n",
    "\n",
     "Here $\\Sigma_{12} = \\left[\\sigma_{10}, \\sigma_{1z}\\right]$, and we can compute\n",
    "\n",
    "\\begin{equation*}\n",
    "\\Sigma_{22}^{-1} = \\left[\\begin{array}{cc} \\frac{1}{\\sigma_0^2-\\sigma_{0z}^2}& -\\frac{\\sigma_{0z}}{\\sigma_0^2-\\sigma_{0z}^2}\\\\ -\\frac{\\sigma_{0z}}{\\sigma_0^2-\\sigma_{0z}^2}&\\frac{\\sigma_0^2}{\\sigma_0^2-\\sigma_{0z}^2} \\end{array} \\right]\n",
    "\\end{equation*}"
   ]
  },
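As a quick sanity check on the closed-form $\Sigma_{22}^{-1}$ above, a pure-Python sketch (the values for $\sigma_0^2$ and $\sigma_{0z}$ are illustrative) multiplies $\Sigma_{22}$ by the formula's output and recovers the identity matrix:

```python
def sigma22_inverse(s0sq, s0z):
    """Closed-form inverse of [[s0sq, s0z], [s0z, 1]], as in the text:
    1/(s0sq - s0z^2) * [[1, -s0z], [-s0z, s0sq]]."""
    det = s0sq - s0z ** 2
    return ((1 / det, -s0z / det), (-s0z / det, s0sq / det))

def matmul2(A, B):
    """Product of two 2x2 matrices stored as tuples of rows."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

S22 = ((2.0, 0.3), (0.3, 1.0))          # sigma_0^2 = 2, sigma_0z = 0.3
prod = matmul2(S22, sigma22_inverse(2.0, 0.3))
```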
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Typical Metropolis-Hastings draw\n",
    "\n",
     "The basic idea behind a Metropolis-Hastings draw is to look at the ratio of two values of a function, and then accept or reject based on it. That is, suppose we have a given value of $x$, $x_0$. We have some measure of how likely $x_0$ is, say $f(x_0)$. This value may be incomplete in the sense that it is not a proper density that integrates to one. In fact, it can be almost anything! Now, by some means, we produce a draw $x_1$. The MH algorithm says we should compute the ratio:\n",
    "\n",
    "\\begin{equation*}\n",
     "\\alpha = \\frac{f(x_1)}{f(x_0)}\n",
    "\\end{equation*}\n",
    "\n",
     "and then accept the draw $x_1$ as the new value of $x$ with probability $\\min(\\alpha,1)$. Mechanically, we draw a uniform random variable $u$ and accept if $u<\\alpha$. Otherwise, we just keep $x_0$ as the given draw. It's weird, but it works (again, not all that easy to show)!"
   ]
  },
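The accept/reject step above can be sketched as a minimal random-walk Metropolis-Hastings chain in pure Python (the standard-normal target, step size, and chain length are illustrative choices). Note that `f` is deliberately unnormalized; only ratios of `f` ever matter:

```python
import math
import random

def mh_chain(f, x0, step, n_draws, seed=0):
    """Random-walk Metropolis-Hastings: propose x1 = x0 + step*noise,
    accept with probability min(1, f(x1)/f(x0)). f need not integrate
    to one - any positive function proportional to the target works."""
    rng = random.Random(seed)
    x = x0
    draws = []
    for _ in range(n_draws):
        proposal = x + step * rng.gauss(0, 1)
        alpha = f(proposal) / f(x)
        if rng.random() < alpha:      # accept with probability min(1, alpha)
            x = proposal
        draws.append(x)               # on rejection, keep the old x
    return draws

# target: a standard normal, deliberately left unnormalized
draws = mh_chain(lambda x: math.exp(-0.5 * x * x),
                 x0=3.0, step=1.0, n_draws=50000)
kept = draws[5000:]                   # discard burn-in
mean = sum(kept) / len(kept)
var = sum((d - mean) ** 2 for d in kept) / len(kept)
```

Even starting far out in the tail at `x0=3.0`, the chain settles into the target's mean and variance after burn-in.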
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Detailed Example\n",
    "\n",
    "We will work with a stock Stata data set and use the `ipystata` interface. Here, we use the classic union data set and try to get a feel for the impact of union membership on wages. So, in this case, our treatment variable is `union` and our outcome variable is the wage. We shall use the variable `south` as an instrument (it will be in the equation predicting union membership but not in the wage equation), but it is worth asking: what happens if we don't believe that our instrument is all that good?  \n",
    "\n",
    "Initially, we are going to take it very slow and worry about how initial draws are produced, and think about how each step works. At the end of this, we are going to write a program to a `.do` file that will speed things up a little bit. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import ipystata"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "(National Longitudinal Survey.  Young Women 14-26 years of age in 1968)\n",
      "(449 observations deleted)\n",
      "(34 observations deleted)\n",
      "\n",
      "Contains data from http://www.stata-press.com/data/r13/union3.dta\n",
      "  obs:         1,210                          National Longitudinal Survey.  Young Women 14-26 years of age in 1968\n",
      " vars:            24                          11 Mar 2013 09:47\n",
      " size:        55,660                          \n",
      "---------------------------------------------------------------------------------------------------------------------------------------------------\n",
      "              storage   display    value\n",
      "variable name   type    format     label      variable label\n",
      "---------------------------------------------------------------------------------------------------------------------------------------------------\n",
      "idcode          int     %8.0g                 NLS ID\n",
      "year            byte    %8.0g                 interview year\n",
      "birth_yr        byte    %8.0g                 birth year\n",
      "age             byte    %8.0g                 age in current year\n",
      "race            byte    %8.0g      racelbl    race\n",
      "msp             byte    %8.0g                 1 if married, spouse present\n",
      "nev_mar         byte    %8.0g                 1 if never married\n",
      "grade           byte    %8.0g                 current grade completed\n",
      "collgrad        byte    %8.0g                 1 if college graduate\n",
      "not_smsa        byte    %8.0g                 1 if not SMSA\n",
      "c_city          byte    %8.0g                 1 if central city\n",
      "south           byte    %8.0g                 1 if south\n",
      "ind_code        byte    %8.0g                 industry of employment\n",
      "occ_code        byte    %8.0g                 occupation\n",
      "union           byte    %8.0g                 1 if union\n",
      "wks_ue          byte    %8.0g                 weeks unemployed last year\n",
      "ttl_exp         float   %9.0g                 total work experience\n",
      "tenure          float   %9.0g                 job tenure, in years\n",
      "hours           int     %8.0g                 usual hours worked\n",
      "wks_work        int     %8.0g                 weeks worked last year\n",
      "ln_wage         float   %9.0g                 ln(wage/GNP deflator)\n",
      "wage            double  %10.0g                real wage\n",
      "black           float   %9.0g                 race black\n",
      "smsa            byte    %8.0g                 1 if SMSA\n",
      "---------------------------------------------------------------------------------------------------------------------------------------------------\n",
      "Sorted by: idcode  year\n",
      "     Note: Dataset has changed since last saved.\n"
     ]
    }
   ],
   "source": [
    "%%stata\n",
    "clear all \n",
    "use http://www.stata-press.com/data/r13/union3\n",
    "set more off\n",
    "keep if union != . \n",
    "keep if tenure != .\n",
    "describe \n",
    "set seed 5150"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Simple regressions and a Probit\n",
    "\n",
     "The first thing that we want to do is just run some simple regressions, and a probit for selection into \"treatment\", which here means belonging to a union. In fact, we will take these as the initial values for our Bayesian estimation procedure. So:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "      Source |       SS           df       MS      Number of obs   =       253\n",
      "-------------+----------------------------------   F(5, 247)       =     18.89\n",
      "       Model |  419.610641         5  83.9221281   Prob > F        =    0.0000\n",
      "    Residual |  1097.48439       247  4.44325665   R-squared       =    0.2766\n",
      "-------------+----------------------------------   Adj R-squared   =    0.2619\n",
      "       Total |  1517.09503       252  6.02021838   Root MSE        =    2.1079\n",
      "\n",
      "------------------------------------------------------------------------------\n",
      "        wage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]\n",
      "-------------+----------------------------------------------------------------\n",
      "         age |   .1562246   .0492065     3.17   0.002     .0593067    .2531425\n",
      "       grade |   .3897226    .062765     6.21   0.000     .2660997    .5133456\n",
      "        smsa |    1.10633   .3384687     3.27   0.001     .4396772    1.772983\n",
      "       black |  -.9184192   .2896539    -3.17   0.002    -1.488926   -.3479126\n",
      "      tenure |   .1521186   .0836497     1.82   0.070    -.0126391    .3168762\n",
      "       _cons |  -2.814175    1.26752    -2.22   0.027    -5.310702   -.3176493\n",
      "------------------------------------------------------------------------------\n",
      "\n",
      "      Source |       SS           df       MS      Number of obs   =       957\n",
      "-------------+----------------------------------   F(5, 951)       =     92.69\n",
      "       Model |  1504.87835         5   300.97567   Prob > F        =    0.0000\n",
      "    Residual |  3088.09024       951   3.2472032   R-squared       =    0.3276\n",
      "-------------+----------------------------------   Adj R-squared   =    0.3241\n",
      "       Total |  4592.96859       956  4.80436045   Root MSE        =     1.802\n",
      "\n",
      "------------------------------------------------------------------------------\n",
      "        wage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]\n",
      "-------------+----------------------------------------------------------------\n",
      "         age |   .1452795   .0210829     6.89   0.000      .103905    .1866539\n",
      "       grade |   .4552808   .0335395    13.57   0.000     .3894608    .5211008\n",
      "        smsa |    .954457    .133461     7.15   0.000     .6925449    1.216369\n",
      "       black |  -.5250849   .1390976    -3.77   0.000    -.7980586   -.2521111\n",
      "      tenure |   .2277879   .0367481     6.20   0.000     .1556711    .2999047\n",
      "       _cons |  -4.535484   .5849591    -7.75   0.000    -5.683444   -3.387525\n",
      "------------------------------------------------------------------------------\n",
      "\n",
      "Iteration 0:   log likelihood = -620.42714  \n",
      "Iteration 1:   log likelihood = -592.34076  \n",
      "Iteration 2:   log likelihood = -592.15539  \n",
      "Iteration 3:   log likelihood = -592.15536  \n",
      "\n",
      "Probit regression                               Number of obs     =      1,210\n",
      "                                                LR chi2(3)        =      56.54\n",
      "                                                Prob > chi2       =     0.0000\n",
      "Log likelihood = -592.15536                     Pseudo R2         =     0.0456\n",
      "\n",
      "------------------------------------------------------------------------------\n",
      "       union |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]\n",
      "-------------+----------------------------------------------------------------\n",
      "       south |  -.4895032   .0933276    -5.24   0.000    -.6724221   -.3065844\n",
      "       black |   .4397974   .0972261     4.52   0.000     .2492377    .6303572\n",
      "      tenure |   .0997638   .0236575     4.22   0.000      .053396    .1461317\n",
      "       _cons |  -.9679795   .0746464   -12.97   0.000    -1.114284   -.8216753\n",
      "------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "%%stata\n",
    "reg wage age grade smsa black tenure if union == 1\n",
    "mat binit1 = e(b)\n",
    "\n",
    "reg wage age grade smsa black tenure if union == 0\n",
    "mat binit0 = e(b) \n",
    "\n",
    "probit union south black tenure\n",
    "mat einit = e(b)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Steps in Bayesian Estimation\n",
    "\n",
     "Now, we pull all the variables that we need into `Mata`, append a constant column to form the full design matrices, and load the initial values from the regressions above. We also initialize the log standard deviations at zero, and set all covariance terms to 0 as well."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     st_view(y=., ., \"wage\")\n",
      "\n",
      ":     st_view(tr=., ., \"union\")\n",
      "\n",
      ":     st_view(X=., ., \"age grade smsa black tenure\")\n",
      "\n",
      ":     st_view(W=., ., \"south black tenure\")\n",
      "\n",
      ":     X = X, J(rows(y), 1, 1)\n",
      "\n",
      ":     W = W, J(rows(y), 1, 1)\n",
      "\n",
      ":     b0 = st_matrix(\"binit0\")\n",
      "\n",
      ":     e = st_matrix(\"einit\")\n",
      "\n",
      ":     ne = cols(e)\n",
      "\n",
      ":     lnsd0 = 0\n",
      "\n",
      ":     v10   = 0\n",
      "\n",
      ":     v1t   = 0\n",
      "\n",
      ":     v0t   = 0\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    st_view(y=., ., \"wage\")\n",
    "    st_view(tr=., ., \"union\")\n",
    "    st_view(X=., ., \"age grade smsa black tenure\")\n",
    "    st_view(W=., ., \"south black tenure\")\n",
    "    X = X, J(rows(y), 1, 1)\n",
    "    W = W, J(rows(y), 1, 1)\n",
    "\n",
    "    b1 = st_matrix(\"binit1\")        \n",
    "    b0 = st_matrix(\"binit0\")\n",
    "    e = st_matrix(\"einit\")\n",
    "\n",
    "    nb = cols(b1)\n",
    "    ne = cols(e)\n",
    "\n",
    "    lnsd1 = 0                      \n",
    "    lnsd0 = 0\n",
    "    v10   = 0\n",
    "    v1t   = 0\n",
    "    v0t   = 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Some required functions \n",
    "\n",
    "### Robust `invnormal` function\n",
    "\n",
     "One further thing we need to do is make a more stable inverse-normal function. If a normal probability winds up being exactly one or zero, the stock `invnormal` function will return a missing value, and it is helpful to rule this out in advance, as it can wreak havoc with everything.\n",
    "\n",
    "### Joint likelihood function\n",
    "\n",
    "We also create a function that takes error terms $y-\\mu$ and a variance matrix $\\Sigma$, and calculates the joint likelihood of all the data based on these things. We will use this when drawing values for the variance matrix. That is, we will take a proposal value of, say, $\\sigma_{10}$ and calculate $LL(\\Sigma)$ at this value. We will compare this to an existing value using a Metropolis-Hastings step as described above."
   ]
  },
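The joint likelihood's three parts (normalizing constant, quadratic form, log-determinant) can be checked in the one-dimensional case, where the formula must match the sum of scalar normal log-densities. A pure-Python sketch (this mirrors the structure of the Mata `ln_L()` below, not its exact code; the error values are illustrative):

```python
import math

def ln_L(errs, sigma2):
    """Joint normal log-likelihood of 1-D errors with variance sigma2,
    split into the same three parts as the Mata function:
    constant, quadratic form, and log-determinant."""
    n = len(errs)
    part1 = -n / 2 * math.log(2 * math.pi)
    part2 = -0.5 * sum(e * e / sigma2 for e in errs)
    part3 = -n / 2 * math.log(sigma2)
    return part1 + part2 + part3

# check against summing the scalar normal log-density directly
errs = [0.3, -1.2, 0.7, 2.0]
sigma2 = 1.5
direct = sum(
    -0.5 * math.log(2 * math.pi * sigma2) - e * e / (2 * sigma2)
    for e in errs
)
```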
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false,
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     real matrix invnormstab(X) {\n",
      ">         XHat = editvalue(XHat, 1, 1e-16 )\n",
      ">     }\n",
      "\n",
      ":     real scalar ln_L(real matrix errs, real matrix Sigma) {\n",
      ">         part2 = -1/2*colsum(rowsum( (errs*invsym(Sigma):*errs)))\n",
      ">         return(part1 + part2 + part3)\n",
      ">     }\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    real matrix invnormstab(X) {\n",
     "        XHat = editvalue(X, 0, 1e-323)       //replace exact 0 with a tiny positive probability\n",
     "        XHat = editvalue(XHat, 1, 1 - 1e-16) //replace exact 1 with a value just below 1\n",
     "        return(invnormal(XHat))              //apply invnormal, which now never returns missing\n",
    "    }\n",
    "    real scalar ln_L(real matrix errs, real matrix Sigma) {\n",
    "        part1 = -cols(errs)*rows(errs)/2*ln(2*pi())\n",
    "        part2 = -1/2*colsum(rowsum( (errs*invsym(Sigma):*errs)))\n",
    "        part3 = -rows(errs)/2*ln(det(Sigma))\n",
    "        return(part1 + part2 + part3)\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Simple initial values\n",
    "\n",
     "While the proper draws condition on everything else (as described above), to get started we really just need values of the latent variables that are in the right ballpark. So, we draw these as follows:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     y0Hat = X*b0' + rnormal(rows(y),1,0,1)*exp(lnsd0)\n",
      "\n",
      ":     y1Hat = X*b1' + rnormal(rows(y),1,0,1)*exp(lnsd1)\n",
      "\n",
      ":     y1 = tr:*y + (1 :- tr):*y1Hat\n",
      "\n",
      ":     y0 = (1 :- tr):*y + tr:*y0Hat\n",
      "\n",
      ":     et  = invnormstab( normal(-muz) + (1 :- normal(-muz)):*runiform(rows(muz),1) )\n",
      "\n",
      ":     ent = invnormstab( normal(-muz):*runiform(rows(muz),1) )\n",
      "\n",
      ":     z = muz + et:*tr + ent:*(1 :- tr)\n",
      "\n",
      ":     \n",
      ":     m0 = X*b0'\n",
      "\n",
      ":     m1 = X*b1'\n",
      "\n",
      ":     mt = W*e'\n",
      "\n",
      ":     ey1 = (y1 - m1)\n",
      "\n",
      ":     ey0 = (y0 - m0)\n",
      "\n",
      ":     et  = (z - mt)\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    y0Hat = X*b0' + rnormal(rows(y),1,0,1)*exp(lnsd0)\n",
    "    y1Hat = X*b1' + rnormal(rows(y),1,0,1)*exp(lnsd1)\n",
    "    y1 = tr:*y + (1 :- tr):*y1Hat\n",
    "    y0 = (1 :- tr):*y + tr:*y0Hat\n",
    "\n",
    "    muz = W*e'\n",
    "    et  = invnormstab( normal(-muz) + (1 :- normal(-muz)):*runiform(rows(muz),1) )\n",
    "    ent = invnormstab( normal(-muz):*runiform(rows(muz),1) )\n",
    "    z = muz + et:*tr + ent:*(1 :- tr)\n",
    "    \n",
    "    m0 = X*b0'\n",
    "    m1 = X*b1'\n",
    "    mt = W*e'\n",
    "    ey1 = (y1 - m1)\n",
    "    ey0 = (y0 - m0)\n",
    "    et  = (z - mt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now, we make some placeholders to hold parameter draws and set the number of draws. We also calculate the symmetric inverses of $X'X$ and $W'W$, as these never change, so there is no reason to keep recalculating them as we run through the iterations!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     b1Hold        = b1\n",
      "\n",
      ":     b0Hold        = b0\n",
      "\n",
      ":     eHold         = e\n",
      "\n",
      ":     sd1Hold       = lnsd1\n",
      "\n",
      ":     sd0Hold       = lnsd0\n",
      "\n",
      ":     v10Hold       = v10\n",
      "\n",
      ":     v1tHold       = v1t\n",
      "\n",
      ":     v0tHold       = v0t\n",
      "\n",
      ":     \n",
      ":     XX = invsym(X'X)\n",
      "\n",
      ":     WW = invsym(W'W)\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    b1Hold        = b1\n",
    "    b0Hold        = b0\n",
    "    eHold         = e\n",
    "    sd1Hold       = lnsd1\n",
    "    sd0Hold       = lnsd0\n",
    "    v10Hold       = v10\n",
    "    v1tHold       = v1t\n",
    "    v0tHold       = v0t\n",
    "\n",
    "    draws = 1000\n",
    "    \n",
    "    XX = invsym(X'X)\n",
    "    WW = invsym(W'W)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Draws of a latent variable - illustration\n",
    "\n",
    "While draws typically follow what we did above, it is helpful to just see how a typical draw and update goes for our $y_1$ variable. In this case, we have to compute the sub-matrices of $\\Sigma$, calculate conditional means, and then draw. Finally, we will update the error vector. The whole thing looks a little something like this:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ": \n",
      ":     Sig12 = (v10, v1t)\n",
      "\n",
      ":     Sig22 = exp(lnsd0)^2, v0t \\ v0t, 1\n",
      "\n",
      ":     Sig22m1 = invsym(Sig22)\n",
      "\n",
      ":     CV = exp(lnsd1)^2 - Sig12*Sig22m1*Sig12'\n",
      "\n",
      ":     y1Hat = mc1 + rnormal(rows(y),1,0,1)*sqrt(CV)\n",
      "\n",
      ":     y1    = tr:*y + (1 :-tr):*y1Hat\n",
      "\n",
      ":     vb1 = CV*XX\n",
      "\n",
      ":     b1 = mb1 + cholesky(vb1)*rnormal(cols(b1), 1, 0, 1)\n",
      "\n",
      ":     b1 = b1'\n",
      "\n",
      ":     ey1 = (y1 - m1)\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "\n",
    "    Sig12 = (v10, v1t)                                      //Compute Sigma[12] for y1\n",
    "    Sig22 = exp(lnsd0)^2, v0t \\ v0t, 1                      //Compute Sigma[22] for y1\n",
    "    Sig22m1 = invsym(Sig22)                                 //Invert Sigma[22]\n",
    "\n",
     "    CM = rowsum( (Sig12*Sig22m1):*(ey0, et) )               //Sigma[12]*Sigma[22]^-1*[(y0,z)-(m0,mz)], computed row by row\n",
     "    CV = exp(lnsd1)^2 - Sig12*Sig22m1*Sig12'                //variance of y1 conditional on y0, z\n",
    "\n",
    "    mc1   = X*b1' + CM                                         //Mean of y1 conditional on y0,z\n",
    "    y1Hat = mc1 + rnormal(rows(y),1,0,1)*sqrt(CV)           //Drawing y1 from conditional distribution\n",
    "    y1    = tr:*y + (1 :-tr):*y1Hat                         //Replacing y1 with draws but only if y1 not observed\n",
    "\n",
    "    mb1 = XX*X'(y1 - CM)                                    //mean(b1) = (X'X)^{-1}(X'y1) y1 is purged of dependence on y0,z \n",
    "    vb1 = CV*XX                                             //Var(b1)\n",
    "    b1 = mb1 + cholesky(vb1)*rnormal(cols(b1), 1, 0, 1)     //Drawing new b1, given mean and variance\n",
    "    b1 = b1'                                                //row to column vector\n",
    "\n",
    "    m1 = X*b1'                                              //computation of new deviations from mean\n",
    "    ey1 = (y1 - m1)"
   ]
  },
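  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a cross-check on the algebra, here is a minimal NumPy sketch (not the Mata code above) of the same conditional-normal computation: partition $\\Sigma$, form the conditional variance $\\sigma_1^2 - \\Sigma_{12}\\Sigma_{22}^{-1}\\Sigma_{12}'$, and draw $y_1$ given the residuals of $(y_0, z)$. All parameter values below are hypothetical.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Hypothetical parameters of the 3x3 covariance of (y1, y0, z)\n",
    "sd1, sd0 = 1.2, 0.9\n",
    "v10, v1t, v0t = 0.3, 0.2, 0.1\n",
    "\n",
    "Sig12 = np.array([v10, v1t])                   # Cov(y1, (y0, z))\n",
    "Sig22 = np.array([[sd0**2, v0t], [v0t, 1.0]])  # Var((y0, z))\n",
    "Sig22m1 = np.linalg.inv(Sig22)\n",
    "\n",
    "# Conditional variance of y1 given (y0, z) -- a scalar\n",
    "CV = sd1**2 - Sig12 @ Sig22m1 @ Sig12\n",
    "\n",
    "# Conditional mean shift, observation by observation, from residuals\n",
    "ey0 = rng.normal(size=5)\n",
    "et = rng.normal(size=5)\n",
    "CM = np.column_stack([ey0, et]) @ (Sig22m1 @ Sig12)\n",
    "\n",
    "m1 = np.zeros(5)                               # stand-in for X*b1\n",
    "y1Hat = m1 + CM + rng.normal(size=5) * np.sqrt(CV)\n",
    "```\n",
    "\n",
    "Conditioning can only shrink the variance, so `CV` must land strictly between zero and $\\sigma_1^2$ whenever $\\Sigma$ is positive definite."
   ]
  },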
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      "                  1              2              3              4              5              6\n",
      "    +-------------------------------------------------------------------------------------------+\n",
      "  1 |   .1525796931    .3966710577    1.151345935   -.8960741013    .1540438274    -2.88723905  |\n",
      "    +-------------------------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "b1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Typical MCMC - Metropolis-Hastings draw\n",
    "\n",
    "Here is what a draw might look like for a parameter whose conditional distribution we do not know. In the case below, we are drawing the log standard deviation of the outcome among the treated: $\\ln(\\sigma_1)$. Note that as a practical matter, we need a few things such as a proposal standard deviation `prosd1`, a damping parameter `gam`, and a target acceptance rate `asta`. \n",
    "\n",
    "Below is one step in an adaptive MCMC algorithm (for more details, see [Baker, 2015](http://www.stata-journal.com/article.html?article=st0354)). The idea is that we tune the draws so that they are accepted at the target rate $a^*$: the proposal standard deviation is scaled up when the acceptance probability exceeds the target and scaled down when it falls short. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     prosd1 = 1\n",
      "\n",
      ":     gam   = 1\n",
      "\n",
      ":     asta = .4\n",
      "\n",
      ":     delta = 1\n",
      "\n",
      ":     Sigma    = exp(lnsd1)^2,   v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1\n",
      "\n",
      ":     SigmaHat = exp(lnsd1Hat)^2,v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1\n",
      ">         valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
      ">         alpha = min((exp(rat), 1))\n",
      ">             prosd1 = exp(gam*(alpha - asta))*prosd1\n",
      ">     else {\n",
      ">     }\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    prosd1 = 1                                                                   //Standard dev. for proposal\n",
    "    gam   = 1                                                                       //Damping parameter\n",
    "    asta = .4                                                                    //Acceptance rate target\n",
    "    delta = 1                                                                    //Exponent on the diminishing adaptation rate\n",
    "\n",
    "    lnsd1Hat = lnsd1 + rnormal(1,1,0,1)*prosd1                                   //Draw from normal with mean=previous draw\n",
    "    Sigma    = exp(lnsd1)^2,   v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1   //Constructing existing variance matrix\n",
    "    SigmaHat = exp(lnsd1Hat)^2,v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1   //Proposed variance matrix with new lnsd1\n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {                                 //Reject if new variance mat is not posdef\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)                                     //Compute data likelihood at old Sigma\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)                                  //Compute data likelihood at new Sigma\n",
    "        rat = valHat - val                                                       //Log ratio of two values.\n",
    "        alpha = min((exp(rat), 1))                                               //alpha is actual ratio\n",
    "        if (runiform(1,1,0,1) < alpha) lnsd1 = lnsd1Hat                          //accept draw with prob. alpha\n",
    "            prosd1 = exp(gam*(alpha - asta))*prosd1                              //adjust proposal dist. up or down \n",
    "    }\n",
    "    else {\n",
    "        prosd1 = exp(-asta*gam)*prosd1                                           //adjustment if rejected due to nonposdef\n",
    "    }"
   ]
  },
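  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Stripped of the model-specific pieces, the accept/reject and scale-update logic above can be sketched in a few lines of Python. This is a hypothetical scalar target, not the Mata routine:\n",
    "\n",
    "```python\n",
    "import math\n",
    "import random\n",
    "\n",
    "random.seed(1)\n",
    "\n",
    "def log_target(x):\n",
    "    # Hypothetical log-density: standard normal, up to a constant\n",
    "    return -0.5 * x * x\n",
    "\n",
    "x, prosd = 0.0, 1.0   # current state, proposal std. dev.\n",
    "gam, asta = 1.0, 0.4  # damping parameter, target acceptance rate\n",
    "\n",
    "for i in range(100):\n",
    "    xHat = x + random.gauss(0.0, 1.0) * prosd       # propose\n",
    "    rat = log_target(xHat) - log_target(x)          # log acceptance ratio\n",
    "    alpha = min(math.exp(rat), 1.0)\n",
    "    if random.random() < alpha:                     # accept with prob. alpha\n",
    "        x = xHat\n",
    "    prosd = math.exp(gam * (alpha - asta)) * prosd  # tune proposal scale\n",
    "```\n",
    "\n",
    "Over many iterations the proposal scale settles near a value at which draws are accepted at roughly the target rate."
   ]
  },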
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Proposal parameters\n",
    "\n",
    "The last thing we need is a set of proposal variances. That is, when drawing new values for, say, $\\sigma_{10}$, we need some distribution from which to propose them. We will start with a proposal standard deviation of one for each parameter and tune it up or down until draws are accepted at the target rate. Here are these parameters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ":     prosd1 = 1\n",
      "\n",
      ":     prosd0 = 1\n",
      "\n",
      ":     prov10 = 1\n",
      "\n",
      ":     prov1t = 1\n",
      "\n",
      ":     prov0t = 1\n",
      "\n",
      ":     draws = 10000\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "    prosd1 = 1\n",
    "    prosd0 = 1\n",
    "    prov10 = 1\n",
    "    prov1t = 1\n",
    "    prov0t = 1\n",
    "    draws = 10000"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "We now have enough to put together a whole loop, and estimate parameters via a full-blown Bayesian approach. First, we make a few placeholders to hold the draws:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The complete loop\n",
    "\n",
    "And here is the complete loop, following the Gibbs sampling scheme outlined above. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mata output:\n",
      "\n",
      ">     Sig12 = (v10, v1t)\n",
      ">     Sig22m1 = invsym(Sig22)\n",
      ">     CM = rowsum( (Sig12*Sig22m1):*(ey0, et) )\n",
      "> \n",
      ">     y1Hat = mc1 + rnormal(rows(y),1,0,1)*sqrt(CV)\n",
      ">     \n",
      ">     vb1 = CV*XX\n",
      ">     b1 = b1'\n",
      ">     m1 = X*b1'\n",
      "> \n",
      ">     Sig22 = exp(lnsd1)^2, v1t \\ v1t, 1\n",
      ">     \n",
      ">     CV = exp(lnsd0)^2 - Sig12*Sig22m1*Sig12'\n",
      ">     mc0   = m0 + CM\n",
      ">     y0 = (1 :- tr):*y + tr:*y0Hat\n",
      ">     mb0 = XX*X'(y0 - CM)\n",
      ">     b0 = mb0 + cholesky(vb0)*rnormal(cols(b0), 1, 0, 1)\n",
      "> \n",
      ">     ey0 = (y0 - m0)\n",
      ">     Sig12 = (v1t, v0t)\n",
      ">     Sig22m1 = invsym(Sig22)\n",
      ">     CM = rowsum( (Sig12*Sig22m1):*(ey1, ey0) )\n",
      ">     \n",
      ">     et  = CV*invnormstab( normal(-mct/CV) + (1 :- normal(-mct/CV)):*runiform(rows(mct),1) )\n",
      ">     z = mct + et:*tr + ent:*(1 :- tr)\n",
      ">     meane = WW*W'(z - CM)\n",
      ">     e = meane + cholesky(vare)*rnormal(cols(e), 1, 0, 1)\n",
      ">     \n",
      ">     et = (z - mt)\n",
      ">     gam = 1/i^delta\n",
      ">     lnsd1Hat = lnsd1 + rnormal(1,1,0,1)*prosd1\n",
      ">     SigmaHat = exp(lnsd1Hat)^2,v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1 \n",
      ">     if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
      ">         valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
      ">         alpha = min((exp(rat), 1))\n",
      ">         prosd1 = exp(gam*(alpha - asta))*prosd1\n",
      ">     else {\n",
      ">     }\n",
      ">     lnsd0Hat = lnsd0 + rnormal(1,1,0,1)*prosd0\n",
      ">     SigmaHat = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0Hat)^2, v0t \\ v1t, v0t, 1 \n",
      ">     if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
      ">         valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
      ">         alpha = min((exp(rat), 1))\n",
      ">         prosd0 = exp(gam*(alpha - asta))*prosd0\n",
      ">     else {\n",
      ">     }\n",
      ">     v10Hat = v10 + rnormal(1,1,0,1)*prov10\n",
      ">     SigmaHat = exp(lnsd1)^2, v10Hat, v1t \\ v10Hat, exp(lnsd0)^2, v0t \\ v1t, v0t, 1 \n",
      ">     if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
      ">         valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
      ">         alpha = min((exp(rat), 1))\n",
      ">         prov10 = exp(gam*(alpha - asta))*prov10\n",
      ">     else {\n",
      ">     }\n",
      "> \n",
      ">     Sigma    = exp(lnsd1)^2, v10, v1t    \\ v10, exp(lnsd0)^2, v0t \\ v1t,    v0t, 1\n",
      "> \n",
      ">         val    = ln_L((ey1, ey0, et), Sigma)\n",
      ">         rat = valHat - val\n",
      ">         if (runiform(1,1,0,1) < alpha) v1t = v1tHat\n",
      ">     }\n",
      ">         prov1t = exp(-asta*gam)*prov1t\n",
      ">     \n",
      ">     Sigma    = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0)^2, v0t    \\ v1t, v0t,    1\n",
      "> \n",
      ">         val    = ln_L((ey1, ey0, et), Sigma)\n",
      ">         rat = valHat - val\n",
      ">         if (runiform(1,1,0,1) < alpha) v0t = v0tHat\n",
      ">     }\n",
      ">         prov0t = exp(-asta*gam)*prov0t\n",
      "> \n",
      ">     b0Hold = b0Hold \\ b0\n",
      ">     \n",
      ">     sd0Hold = sd0Hold \\ lnsd0\n",
      ">     v1tHold = v1tHold \\ v1t\n",
      "> \n",
      "> }\n"
     ]
    }
   ],
   "source": [
    "%%stata --mata\n",
    "for (i=1;i<=10000;i++) {\n",
    "\n",
    "    Sig12 = (v10, v1t)\n",
    "    Sig22 = exp(lnsd0)^2, v0t \\ v0t, 1\n",
    "    Sig22m1 = invsym(Sig22)\n",
    "\n",
    "    CM = rowsum( (Sig12*Sig22m1):*(ey0, et) )\n",
    "    CV = exp(lnsd1)^2 - Sig12*Sig22m1*Sig12'\n",
    "\n",
    "    mc1   = m1 + CM\n",
    "    y1Hat = mc1 + rnormal(rows(y),1,0,1)*sqrt(CV)\n",
    "    y1    = tr:*y + (1 :-tr):*y1Hat\n",
    "    \n",
    "    mb1 = XX*X'(y1 - CM)\n",
    "    vb1 = CV*XX\n",
    "    b1 = mb1 + cholesky(vb1)*rnormal(cols(b1), 1, 0, 1)\n",
    "    b1 = b1'\n",
    "\n",
    "    m1 = X*b1'\n",
    "    ey1 = (y1 - m1)\n",
    "\n",
    "    Sig12 = (v10, v0t)\n",
    "    Sig22 = exp(lnsd1)^2, v1t \\ v1t, 1\n",
    "    Sig22m1 = invsym(Sig22)\n",
    "    \n",
    "    CM = rowsum( (Sig12*Sig22m1):*(ey1, et) )\n",
    "    CV = exp(lnsd0)^2 - Sig12*Sig22m1*Sig12'\n",
    "    \n",
    "    mc0   = m0 + CM\n",
    "    y0Hat = mc0 + rnormal(rows(y),1,0,1)*sqrt(CV)\n",
    "    y0 = (1 :- tr):*y + tr:*y0Hat\n",
    "        \n",
    "    mb0 = XX*X'(y0 - CM)\n",
    "    vb0 = CV*XX\n",
    "    b0 = mb0 + cholesky(vb0)*rnormal(cols(b0), 1, 0, 1)\n",
    "    b0 = b0'\n",
    "\n",
    "    m0 = X*b0'\n",
    "    ey0 = (y0 - m0)\t\n",
    "    \n",
    "    Sig12 = (v1t, v0t)\n",
    "    Sig22 = exp(lnsd1)^2, v10 \\ v10, exp(lnsd0)^2\n",
    "    Sig22m1 = invsym(Sig22)\n",
    "\n",
    "    CM = rowsum( (Sig12*Sig22m1):*(ey1, ey0) )\n",
    "    CV = 1 - Sig12*Sig22m1*Sig12'\n",
    "    \n",
    "    mct = mt + CM\n",
    "    et  = CV*invnormstab( normal(-mct/CV) + (1 :- normal(-mct/CV)):*runiform(rows(mct),1) )\n",
    "    ent = CV*invnormstab( normal(-mct/CV):*runiform(rows(mct),1) )\n",
    "    z = mct + et:*tr + ent:*(1 :- tr)\n",
    "\n",
    "    meane = WW*W'(z - CM)\n",
    "    vare  = CV*WW\n",
    "    e = meane + cholesky(vare)*rnormal(cols(e), 1, 0, 1)\n",
    "    e = e'\n",
    "    \n",
    "    mt = W*e'\n",
    "    et = (z - mt)\n",
    "\n",
    "    gam = 1/i^delta\n",
    "\n",
    "    lnsd1Hat = lnsd1 + rnormal(1,1,0,1)*prosd1\n",
    "    Sigma    = exp(lnsd1)^2,   v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1\n",
    "    SigmaHat = exp(lnsd1Hat)^2,v10, v1t \\ v10, exp(lnsd0)^2, v0t \\ v1t, v0t, 1 \n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
    "        rat = valHat - val\n",
    "        alpha = min((exp(rat), 1))\n",
    "        if (runiform(1,1,0,1) < alpha) lnsd1 = lnsd1Hat\n",
    "        prosd1 = exp(gam*(alpha - asta))*prosd1\n",
    "    }\n",
    "    else {\n",
    "        prosd1 = exp(-asta*gam)*prosd1\n",
    "    }\n",
    "    \n",
    "    lnsd0Hat = lnsd0 + rnormal(1,1,0,1)*prosd0\n",
    "    Sigma    = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0)^2,    v0t \\ v1t, v0t, 1\n",
    "    SigmaHat = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0Hat)^2, v0t \\ v1t, v0t, 1 \n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
    "        rat = valHat - val\n",
    "        alpha = min((exp(rat), 1))\n",
    "        if (runiform(1,1,0,1) < alpha) lnsd0 = lnsd0Hat\n",
    "        prosd0 = exp(gam*(alpha - asta))*prosd0\n",
    "    }\n",
    "    else {\n",
    "        prosd0 = exp(-asta*gam)*prosd0\n",
    "    }\n",
    "\n",
    "    v10Hat = v10 + rnormal(1,1,0,1)*prov10\n",
    "    Sigma    = exp(lnsd1)^2, v10,    v1t \\ v10,    exp(lnsd0)^2, v0t \\ v1t, v0t, 1\n",
    "    SigmaHat = exp(lnsd1)^2, v10Hat, v1t \\ v10Hat, exp(lnsd0)^2, v0t \\ v1t, v0t, 1 \n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
    "        rat = valHat - val\n",
    "        alpha = min((exp(rat), 1))\n",
    "        if (runiform(1,1,0,1) < alpha) v10 = v10Hat\n",
    "        prov10 = exp(gam*(alpha - asta))*prov10\n",
    "    }\n",
    "    else {\n",
    "        prov10 = exp(-asta*gam)*prov10\n",
    "    }\n",
    "\n",
    "\n",
    "    v1tHat = v1t + rnormal(1,1,0,1)*prov1t\n",
    "    Sigma    = exp(lnsd1)^2, v10, v1t    \\ v10, exp(lnsd0)^2, v0t \\ v1t,    v0t, 1\n",
    "    SigmaHat = exp(lnsd1)^2, v10, v1tHat \\ v10, exp(lnsd0)^2, v0t \\ v1tHat, v0t, 1 \n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
    "        rat = valHat - val\n",
    "        alpha = min((exp(rat), 1))\n",
    "        if (runiform(1,1,0,1) < alpha) v1t = v1tHat\n",
    "        prov1t = exp(gam*(alpha - asta))*prov1t\n",
    "    }\n",
    "    else {\n",
    "        prov1t = exp(-asta*gam)*prov1t\n",
    "    }\n",
    "    \n",
    "    v0tHat = v0t + rnormal(1,1,0,1)*prov0t\n",
    "    Sigma    = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0)^2, v0t    \\ v1t, v0t,    1\n",
    "    SigmaHat = exp(lnsd1)^2, v10, v1t \\ v10, exp(lnsd0)^2, v0tHat \\ v1t, v0tHat, 1 \n",
    "\n",
    "    if ( hasmissing(cholesky(SigmaHat)) == 0 ) {\n",
    "        val    = ln_L((ey1, ey0, et), Sigma)\n",
    "        valHat = ln_L((ey1, ey0, et), SigmaHat)\n",
    "        rat = valHat - val\n",
    "        alpha = min((exp(rat), 1))\n",
    "        if (runiform(1,1,0,1) < alpha) v0t = v0tHat\n",
    "        prov0t = exp(gam*(alpha - asta))*prov0t\n",
    "    }\n",
    "    else {\n",
    "        prov0t = exp(-asta*gam)*prov0t\n",
    "    }\n",
    "\n",
    "    b1Hold = b1Hold \\ b1\n",
    "    b0Hold = b0Hold \\ b0\n",
    "    eHold = eHold \\ e\n",
    "    \n",
    "    sd1Hold = sd1Hold \\ lnsd1\n",
    "    sd0Hold = sd0Hold \\ lnsd0\n",
    "    v10Hold = v10Hold \\ v10\n",
    "    v1tHold = v1tHold \\ v1t\n",
    "    v0tHold = v0tHold \\ v0t\t\n",
    "\n",
    "}"
   ]
  },
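  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One detail in the loop worth flagging: the damping parameter is reset each iteration as `gam = 1/i^delta`, so the proposal-scale adjustments shrink as the chain runs (diminishing adaptation). A toy Python illustration of the vanishing step size, with hypothetical acceptance values:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "delta = 1.0\n",
    "alpha, asta = 0.8, 0.4   # hypothetical acceptance prob. and target\n",
    "\n",
    "# Damping sequence used in the loop: gam = 1/i^delta shrinks with i\n",
    "gams = [1.0 / i**delta for i in range(1, 10001)]\n",
    "\n",
    "# Hence the multiplicative tuning factor exp(gam*(alpha - asta)) tends to 1\n",
    "factors = [math.exp(g * (alpha - asta)) for g in gams]\n",
    "```\n",
    "\n",
    "Because the adjustments die out, the sampler eventually behaves like a fixed-proposal Metropolis-Hastings algorithm, which is what justifies treating the later draws as coming from the target distribution."
   ]
  },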
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# Analysis\n",
    "\n",
    "Now that we have draws, what do we do with them? And how do we know they make any sense? A first thing we might do is pull all the draws into Stata, look at distributions, plots, etc. Accordingly:"
   ]
  },
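  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before doing that, it is worth noting what the post-processing amounts to: discard an initial stretch of draws as burn-in, then summarize the rest. A hypothetical NumPy version with simulated draws:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "\n",
    "# Hypothetical stored draws of one scalar parameter\n",
    "hold = rng.normal(0.5, 0.1, size=10000)\n",
    "\n",
    "burn = 1000              # discard early draws as burn-in\n",
    "kept = hold[burn:]\n",
    "\n",
    "postMean = kept.mean()\n",
    "ci = np.quantile(kept, [0.025, 0.975])  # 95% credible interval\n",
    "```\n",
    "\n",
    "The posterior mean plays the role of the point estimate, and the quantiles replace classical confidence intervals."
   ]
  },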
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "already preserved\n",
      "r(621);\n"
     ]
    }
   ],
   "source": [
    "%%stata\n",
    "\n",
    "preserve\n",
    "clear\n",
    "\n",
    "getmata (b1*) = b1Hold\n",
    "getmata (b0*) = b0Hold\n",
    "getmata (e*)= eHold\n",
    "getmata (sd1*) = sd1Hold\n",
    "getmata (sd0*) = sd0Hold \n",
    "getmata (v10*) = v10Hold\n",
    "getmata (v1z*) = v1tHold \n",
    "getmata (v0z*) = v0tHold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some basic summary statistics will show whether our results are reasonable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    Variable |        Obs        Mean    Std. Dev.       Min        Max\n",
      "-------------+---------------------------------------------------------\n",
      "         b11 |      1,001    .1390228    .0161835   .0832467    .172283\n",
      "         b12 |      1,001    .3320835    .0428472   .2511647   .5127502\n",
      "         b13 |      1,001    1.451207    .2224623   .7300177   1.888194\n",
      "         b14 |      1,001   -1.329975    .6246697  -2.318685    -.05129\n",
      "         b15 |      1,001    .0840461    .2367111  -.3149537   .4918711\n",
      "-------------+---------------------------------------------------------\n",
      "         b16 |      1,001    6.881493    3.620512  -3.841752   10.83971\n",
      "\n",
      "binit1[1,6]\n",
      "           age       grade        smsa       black      tenure       _cons\n",
      "y1   .15622459   .38972261   1.1063302  -.91841921   .15211855  -2.8141755\n"
     ]
    }
   ],
   "source": [
    "%%stata\n",
    "sum b1*\n",
    "mat list binit1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    Variable |        Obs        Mean    Std. Dev.       Min        Max\n",
      "-------------+---------------------------------------------------------\n",
      "         b01 |      1,001    .0858018    .0276904    .025864   .1864112\n",
      "         b02 |      1,001    .5172714    .0493536   .3908302   .6178584\n",
      "         b03 |      1,001    1.677852    .2669476   .8070775   2.023189\n",
      "         b04 |      1,001    1.213885    .6434896  -.6862444   1.906479\n",
      "         b05 |      1,001    .8742783    .2115083   .1707931   1.089324\n",
      "-------------+---------------------------------------------------------\n",
      "         b06 |      1,001   -1.435272    .7950539  -5.141699   .2304491\n",
      "\n",
      "binit0[1,6]\n",
      "           age       grade        smsa       black      tenure       _cons\n",
      "y1   .14527947   .45528082   .95445704  -.52508486   .22778787  -4.5354845\n",
      "\n",
      "    Variable |        Obs        Mean    Std. Dev.       Min        Max\n",
      "-------------+---------------------------------------------------------\n",
      "          e1 |      1,001   -31.74949     18.6553  -64.12609  -.2469236\n",
      "          e2 |      1,001    30.38863    17.86487   .2283489   61.29573\n",
      "          e3 |      1,001    7.061067    4.099344   .0949191     14.133\n",
      "          e4 |      1,001    43.61072    24.83539  -.9679795   86.63268\n",
      "\n",
      "einit[1,4]\n",
      "         union:      union:      union:      union:\n",
      "         south       black      tenure       _cons\n",
      "y1  -.48950324   .43979743   .09976384  -.96797947\n"
     ]
    }
   ],
   "source": [
    "%%stata\n",
    "sum b0*\n",
    "mat list binit0\n",
    "\n",
    "sum e*\n",
    "mat list einit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [Root]",
   "language": "python",
   "name": "Python [Root]"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
