{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stimulus coding with HDDMRegression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: This tutorial is more advanced. If you are just starting you might want\n",
    "to head to the :ref:`demo <chap_demo>` instead.\n",
    "\n",
    "In some situations it is useful to fix the magnitude of parameters\n",
    "across stimulus types while also forcing them to have different\n",
    "directions. For example, an independent variable could influence both\n",
    "the drift rate ``v`` and the response bias ``z``. A specific example is an\n",
    "experiment on face-house discrimination with different difficulty\n",
    "levels, where the drift-rate is smaller when the task is more\n",
    "difficult and where the bias to responding house is larger when the\n",
    "task is more difficult.  One way to analyze the effect of difficulty\n",
    "on drift rate and bias in such an experiment is to estimate one drift\n",
    "rate ``v`` for each level, and a response bias ``z`` such that the bias for\n",
    "houses-stimuli is ``z`` and the bias for face stimuli is ``1-z`` (``z = .5``\n",
    "for unbiased decisions in ``HDDM``).\n",
    "\n",
    "The following example describes how to generate simulated data for\n",
    "such an experiment, how to set up the analysis with ``HDDMRegression``,\n",
    "and compares true parameter values with those estimated with\n",
    "``HDDMRegression``."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model Recovery Test for HDDMRegression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The test is performed with simulated data for an experiment with one\n",
    "independent variable with three levels (e.g. three levels of\n",
    "difficulty) which influence both drift rate ``v`` and bias ``z``. Responses\n",
    "are \"accuracy coded\", i.e. correct responses are coded ``1`` and incorrect\n",
    "responses ``0``. Further, stimulus coding of the parameter ``z`` is\n",
    "implemented. \"stimulus coding\" of ``z`` means that we want to fit a model\n",
    "in which the magnitude of the bias is the same for the two stimuli,\n",
    "but its direction \"depends on\" the presented stimulus (e.g. faces or\n",
    "house in a face-house discrimination task). Note that this does not\n",
    "mean that we assume that decision makers adjust their bias after\n",
    "having seen the stimulus. Rather, we want to measure response-bias (in\n",
    "favor of face or house) while assuming the same drift rate for both\n",
    "stimuli. We can achieve this for accuracy coded data by modeling the\n",
    "bias as moved towards the correct response boundary for one stimulus\n",
    "(e.g. ``z = .6`` for houses) and away from the correct response boundary\n",
    "for the other stimulus (``1-z = .4`` for faces).\n",
    "\n",
    "First, we need to import the required python modules."
   ]
  },
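  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick stand-alone illustration (plain ``numpy`` with made-up values, no ``hddm`` needed): with an accuracy-coded bias of ``z = 0.6``, stimulus-A trials start biased towards the correct boundary (``0.6``) and stimulus-B trials away from it (``1 - 0.6 = 0.4``)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical illustration: z is the accuracy-coded bias for stimulus A;\n",
    "# stimulus B uses the mirrored bias 1 - z\n",
    "z = 0.6\n",
    "stimulus = np.array([1, 2, 1, 2])  # 1 = stimulus A, 2 = stimulus B\n",
    "effective_z = np.where(stimulus == 1, z, 1 - z)\n",
    "print(effective_z)"
   ]
  },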
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import hddm\n",
    "from patsy import dmatrix  # for generation of (regression) design matrices\n",
    "import numpy as np  # for basic matrix operations\n",
    "from pandas import Series  # to manipulate data-frames generated by hddm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We save the output of stdout to the file ``ModelRecoveryOutput.txt``.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "sys.stdout = open(\"ModelRecoveryOutput.txt\", \"w\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Creating simulated data for the experiment"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we set the number of subjects and the number of trials per level\n",
    "for the simulated experiment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_subjects = 10\n",
    "trials_per_level = 150  # and per stimulus"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we set up parameters of the drift diffusion process for the three\n",
    "levels and the first stimulus. As desribed earlier ``v`` and ``z`` change\n",
    "accross levels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "level1a = {\"v\": 0.3, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.5, \"sz\": 0, \"st\": 0}\n",
    "level2a = {\"v\": 0.4, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.6, \"sz\": 0, \"st\": 0}\n",
    "level3a = {\"v\": 0.5, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.7, \"sz\": 0, \"st\": 0}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we generate the data for stimulus A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_a, params_a = hddm.generate.gen_rand_data(\n",
    "    {\"level1\": level1a, \"level2\": level2a, \"level3\": level3a},\n",
    "    size=trials_per_level,\n",
    "    subjs=n_subjects,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next come the parameters for the second stimulus, where ``v`` is the same\n",
    "as for the first stimulus. This is different for ``z``. In particular:\n",
    "``z(stimulus_b) = 1 - z(stimulus_a)``. As a result, responses are\n",
    "altogether biased towards responding A. Because we use accuracy coded\n",
    "data, stimulus A is biased towards correct responses, and stimulus B\n",
    "towards incorrect responses. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "level1b = {\"v\": 0.3, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.5, \"sz\": 0, \"st\": 0}\n",
    "level2b = {\"v\": 0.4, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.4, \"sz\": 0, \"st\": 0}\n",
    "level3b = {\"v\": 0.5, \"a\": 2, \"t\": 0.3, \"sv\": 0, \"z\": 0.3, \"sz\": 0, \"st\": 0}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we generate the data for stimulus B"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_b, params_b = hddm.generate.gen_rand_data(\n",
    "    {\"level1\": level1b, \"level2\": level2b, \"level3\": level3b},\n",
    "    size=trials_per_level,\n",
    "    subjs=n_subjects,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We add a column to the ``DataFrame`` identifying stimulus A as 1 and stimulus B as 2.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_a[\"stimulus\"] = Series(np.ones((len(data_a))), index=data_a.index)\n",
    "data_b[\"stimulus\"] = Series(np.ones((len(data_b))) * 2, index=data_a.index)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we merge the data for stimulus A and B"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "mydata = data_a.append(data_b, ignore_index=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Setting up the HDDM regression model\n",
    "\n",
    "Next we need to ensure that the bias is ``z`` for one stimulus and ``1-z``\n",
    "for the other stimulus.  This is implemented here for all stimulus A trials\n",
    "and -1 for stimulus B trials. We use the ``patsy`` command ``dmatrix`` to\n",
    "generate such an array from the stimulus column of our simulated data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def z_link_func(x, data=mydata):\n",
    "    stim = np.asarray(\n",
    "        dmatrix(\"0 + C(s, [[0], [1]])\", {\"s\": data.stimulus.loc[x.index]})\n",
    "    )\n",
    "    # Apply z = (1 - x) to flip them along 0.5\n",
    "    z_flip = stim - x\n",
    "    # The above inverts those values we do not want to flip,\n",
    "    # so invert them back\n",
    "    z_flip[stim == 0] *= -1\n",
    "    return z_flip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(NOTE: earlier versions of this tutorial suggested applying an inverse logit\n",
    "link function to the regression, but this should no longer be used given changes to the prior \n",
    "on the intercept.) \n",
    "Also depending on your python version, the above code may give you errors and you can try this instead:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "def z_link_func(x, data=mydata):\n",
    "    stim = np.asarray(\n",
    "        dmatrix(\n",
    "            \"0 + C(s, [[0], [1]])\",\n",
    "            {\"s\": data.stimulus.loc[x.index]},\n",
    "            return_type=\"dataframe\",\n",
    "        )\n",
    "    )\n",
    "    # Apply z = (1 - x) to flip them along 0.5\n",
    "    z_flip = np.subtract(stim, x.to_frame())\n",
    "    # The above inverts those values we do not want to flip,\n",
    "    # so invert them back\n",
    "    z_flip[stim == 0] *= -1\n",
    "    return z_flip"
   ]
  },
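  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the link function does, here is a stand-alone sketch (plain ``numpy`` with made-up toy values, no ``patsy`` or ``hddm`` needed) that mimics the 0/1 stimulus coding produced by ``dmatrix`` and applies the same flip:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy values: x plays the role of the regression output for z, and stim01\n",
    "# mimics the 0/1 column that dmatrix builds from the stimulus codes\n",
    "x = np.array([0.6, 0.6, 0.7, 0.7])\n",
    "stimulus = np.array([1, 2, 1, 2])  # 1 = stimulus A, 2 = stimulus B\n",
    "stim01 = (stimulus == 2).astype(float)  # 0 for A, 1 for B\n",
    "\n",
    "z_flip = stim01 - x  # B trials become 1 - z, A trials become -z\n",
    "z_flip[stim01 == 0] *= -1  # undo the sign flip for A trials\n",
    "print(z_flip)  # approximately [0.6, 0.4, 0.7, 0.3]"
   ]
  },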
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we set up the regression models for ``z`` and ``v`` and also include the\n",
    "link functions The relevant string here used by ``patsy`` is '1 +\n",
    "C(condition)'. This will generate a design matrix with an intercept\n",
    "(that's what the '1' is for) and two dummy variables for remaining\n",
    "levels. (The column in which the levels are coded has the default name\n",
    "'condition'):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "z_reg = {\"model\": \"z ~ 1 + C(condition)\", \"link_func\": z_link_func}"
   ]
  },
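  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete, here is a hand-built sketch (plain ``numpy``, hypothetical condition labels) of the columns that ``'1 + C(condition)'`` describes for three levels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hand-built sketch of the design matrix '1 + C(condition)' describes:\n",
    "# an intercept column plus dummy columns for level2 and level3\n",
    "conditions = [\"level1\", \"level2\", \"level3\", \"level1\"]\n",
    "intercept = np.ones(len(conditions))\n",
    "level2 = np.array([c == \"level2\" for c in conditions], dtype=float)\n",
    "level3 = np.array([c == \"level3\" for c in conditions], dtype=float)\n",
    "X = np.column_stack([intercept, level2, level3])\n",
    "print(X)  # rows: [1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 0, 0]"
   ]
  },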
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For ``v`` the link function is simply ``x = x``, because no transformations is\n",
    "needed. [However, you could also analyze this experiment with response\n",
    "coded data. Then you would not stimulus code ``z`` but ``v`` and you would\n",
    "have to multiply the ``v`` for one condition with ``-1``, with a link function\n",
    "like the one for ``z`` above, but with out the additional logit transform\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "v_reg = {\"model\": \"v ~ 1 + C(condition)\", \"link_func\": lambda x: x}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can finally put the regression description for the hddm model\n",
    "together. The general template for this is ``[{'model': 'outcome_parameter ~ patsy_design_string', 'link_func': your_link_function }, {...}, ...]``\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "reg_descr = [z_reg, v_reg]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The last step before running the model is to construct the complete hddm regression model by adding data etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "m_reg = hddm.HDDMRegressor(mydata, reg_descr, include=[\"v\", \"a\", \"t\", \"z\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we start the model, and wait for a while (you can go and get\n",
    "several coffees, or read a paper). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<pymc.MCMC.MCMC at 0x148cb0090>"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m_reg.sample(2000, burn=100)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Comparing generative and recovered model parameters\n",
    "\n",
    "First we print the model stats"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "m_reg.print_stats()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is the relevant output for our purposes (in this case I fit a single subject, ie. I set n_subjects = 1 above)\n",
    "\n",
    "Lets first look at ``v``. For ``level1`` this is just the\n",
    "intercept. The value of ``.283`` is in the ball park of the true value\n",
    "of ``.3``. The fit is not perfect, but running a longer chain might\n",
    "help (we are ignoring sophisticated checks of model convergence for\n",
    "this example here). To get the values of ``v`` for levels 2 and 3, we\n",
    "have to add the respective parameters (``0.077`` and ``.22``) to the\n",
    "intercept value. The resulting values of  are again\n",
    "close enough to the true values of ``.4`` and ``.5``. The ``z_Intercept``\n",
    "value of 0.48 is close tothe true value of ``.5``, and the level 2 and level 3\n",
    "offsets are also close (.48 + .12= 0.6 and .48+.21 = 0.69).   In sum,\n",
    "``HDDMRegression`` easily recovered the right order of the parameters\n",
    "``z``. The recovered parameter values are also close to the true\n",
    "parameter values, and this was only for a single subject fit.\n",
    "Parameter estimates are improved with more subjects. "
   ]
  }
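,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the arithmetic, the per-level estimates follow from adding each offset to the intercept (values taken from the fit reported above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reconstruct the per-level estimates by adding each offset to the\n",
    "# intercept (values taken from the fit reported above)\n",
    "v_intercept, v_level2, v_level3 = 0.283, 0.077, 0.22\n",
    "z_intercept, z_level2, z_level3 = 0.48, 0.12, 0.21\n",
    "\n",
    "print(round(v_intercept + v_level2, 3), round(v_intercept + v_level3, 3))\n",
    "print(round(z_intercept + z_level2, 3), round(z_intercept + z_level3, 3))"
   ]
  }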
 ],
 "metadata": {
  "interpreter": {
   "hash": "ff3096d2709bbb36a4584c44f6e6ffdb5e175071e94d34047f50b078bfdc1c6d"
  },
  "kernelspec": {
   "display_name": "Python 3.7.7 ('hddmnn_tutorial')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7 (default, May  6 2020, 04:59:01) \n[Clang 4.0.1 (tags/RELEASE_401/final)]"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
