\documentclass{article} % For LaTeX2e
\usepackage{subfigure, graphicx,times,learning}
\usepackage{nips11submit_e}

\title{Reward Optimization in Primate Brains: A POMDP Model of Random Dots Motion Discrimination}

\author{
Yanping Huang \\
Department of Computer Science and Engineering\\
University of Washington, Seattle, WA 98105 \\
\texttt{huangyp@cs.washington.edu} \\
\And
Rajesh P.N. Rao \\
Department of Computer Science and Engineering\\
University of Washington, Seattle, WA 98105 \\
\texttt{rao@cs.washington.edu} \\
}


\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}
\newcommand{\muhat}{\hat{\mu}}
\newcommand{\betarm}{\mathrm{Beta}}
\newcommand{\binorm}{\mathrm{Bino}}
%\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}


\maketitle
\begin{abstract}
Behavioral studies involving tasks such as the random dots motion discrimination task have provided valuable insights into how the brain makes decisions based on a combination of sensory evidence and prior knowledge. Drift diffusion models have been especially useful in accounting for the psychometric function and mean response times for correct choices in these tasks. However, to explain finite response time at zero percent coherence as well as response times for incorrect choices, one is forced to invoke ad-hoc assumptions such as a time-dependent collapsing decision boundary and hypothetical deadlines for decisions. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We show that the motion discrimination task reduces to the problems of (1) computing beta-distributed beliefs over the unknown motion strength from noisy observations and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) can be shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to making a discrimination decision. We additionally introduce a probabilistic graphical model for incorporating prior knowledge and show that the model can capture experimentally observed effects of prior probability on decision making. Besides providing a normative framework for understanding decision making in the primate brain, our results suggest a way to interpret response time in terms of the sample complexity of the learning problem, thus establishing a new link between biological and machine learning.
\end{abstract}

\section{Introduction}
Humans and animals constantly face the problem of estimating unknown world states and planning actions based on noisy observations. Experimental studies and theoretical models~\cite{Knill96, Zemel98, Rao04, Ma06} have suggested that brains implement approximate Bayesian inference during perception, but how brains plan actions based on probabilistic representations of hidden states has remained an open question. Adopting ideas from previous work of Daw, Dayan, and others~\cite{Daw06, Dayan08, Rao10, Bogacz11}, who explored the use of reinforcement learning models for explaining various aspects of decision making, we propose a model of decision making based on the theory of partially observable Markov decision processes (POMDPs)~\cite{Kaelbling98}. We assume the brain maintains a posterior distribution (the ``belief'' state) over the hidden world state and selects actions based on this belief in order to maximize the expected total reward. We illustrate the proposed model by applying it to the random dots motion discrimination task~\cite{Shadlen01}. We show how the discrimination task reduces to the problems of inferring the underlying motion direction from noisy observations and deciding when to commit to a choice. A declining threshold that governs the switch between accumulating evidence and making a discrimination decision emerges naturally in the action selection policy via reward maximization, reflecting the need to minimize hunger or thirst during the decision period.


\section{Decision-making under POMDP framework}
\subsection{Model Setup}
We model the random dots discrimination task under the POMDP framework. In each trial, the experimenter chooses a fixed motion strength $c$ and presents the agent with a sequence of random dot motions. At time $t$, the agent receives a noisy measurement $o_t \in \{0, \ldots, n\}$, the number of rightward-moving dots on the screen, where $n$ denotes the total number of random dots and $n - o_t$ is the number of leftward-moving dots at time $t$. The observation $o_t$ follows a stationary binomial distribution, $\Pr{o_t |\mu} ={n \choose o_t} \mu^{o_t} (1-\mu)^{n -o_t}$, where the parameter $\mu = \frac{c+1}{2}$ is the probability of an individual dot moving in the rightward direction. We take $\mu$ to be the hidden ``world state'' of the POMDP: it is unknown to the agent, but it remains constant throughout a trial, and $\mu > 0.5$ indicates that the underlying coherent motion is rightward. The task of deciding the direction of the coherently moving dots is therefore equivalent to deciding whether $\mu$ is greater than $0.5$. We assume the agent chooses actions based on the ``belief'' state, the posterior probability distribution over $\mu$ given a sequence of observations $o_{1:t}$:
\begin{eqnarray}
  \label{eq:posterior}
  b_t(\mu) = \frac{\Pr{o_t|\mu}\Pr{\mu|o_{1:t-1}}}{\Pr{o_t}}  = \frac{\mu^{m_R(t)} (1-\mu)^{m_L(t)} \Pr{\mu}}{\prod_{\tau=1}^t \Pr{o_\tau}}
\end{eqnarray}
where $m(t) = n t$, $m_R(t) = \sum_{\tau = 1}^t o_\tau$, and $m_L(t) = m(t) - m_R(t)$. To keep the analysis tractable, we represent the prior probability $\Pr{\mu}$ as a beta distribution with parameters $\alpha_0$ and $\beta_0$; throughout this paper we use a uniform prior, $\alpha_0 = \beta_0 = 1$. The posterior distribution $\Pr{\mu | o_{1:t}}$ can then be written as:
\begin{eqnarray}
  \label{eq:betaBelief}
  b_t(\mu) \propto \mu^{m_R + \alpha_0 - 1} (1-\mu)^{m_L + \beta_0 -1} = \betarm[\mu | \alpha = m_R + \alpha_0, \beta = m_L + \beta_0]
\end{eqnarray}
The belief state $b_t$ at time step $t$  follows a beta distribution with two shape parameters $\alpha$ and $\beta$. Consequently, the posterior probability distribution of $\mu$ depends only on the number of rightward and leftward moving dots $m_R$ and $m_L$. 
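The conjugate beta-binomial update above can be sketched in a few lines of Python; the motion strength, random seed, and trial length below are illustrative choices, not values from the experiments:

```python
import numpy as np

def update_belief(alpha, beta, o, n=1):
    """Conjugate update: after seeing o rightward dots out of n,
    the Beta(alpha, beta) belief becomes Beta(alpha + o, beta + n - o)."""
    return alpha + o, beta + n - o

# Hypothetical trial: uniform prior (alpha0 = beta0 = 1), one dot per frame.
rng = np.random.default_rng(0)
mu_true = 0.7                    # hidden motion parameter (c = 0.4)
alpha, beta = 1, 1
for _ in range(100):
    o = rng.binomial(1, mu_true)
    alpha, beta = update_belief(alpha, beta, o)

posterior_mean = alpha / (alpha + beta)   # E[mu | o_{1:t}]
```

As the text notes, the posterior depends on the observations only through the counts $m_R$ and $m_L$ absorbed into the shape parameters.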

As illustrated in figure~\ref{fig:model}, the agent updates the belief state after receiving the current observation $o_t$ and chooses one of three actions $a \in \{A_R, A_L, A_S\}$, denoting rightward eye movement, leftward eye movement, and sampling, respectively. The agent receives a reward $R(\mu, a)$ determined by the hidden state and the chosen action. When the agent makes a correct choice, $i.e.$, a rightward eye movement $A_R$ when $\mu > 0.5$ ($c > 0$) or a leftward eye movement $A_L$ when $\mu < 0.5$ ($c < 0$), it receives a positive reward $R_P > 0$. The agent receives a penalty or nothing, $R_N \le 0$, when an incorrect choice is made. We further assume the agent is motivated by hunger or thirst to make a decision as quickly as possible. This is modeled as a unit negative penalty $R_S = -1$ for each random dot sample the agent observes, representing the cost the agent pays when choosing $A_S$.



Given a belief state $b_t = (\alpha,\beta)$, the goal of the agent is to find an optimal ``policy'' $\pi^*$ that maximizes the so-called value function  $v^{\pi}(\alpha,\beta)$, which is defined as the expected sum of future rewards given the belief state $(\alpha,\beta)$:
\begin{eqnarray}
  \label{eq:valueFunction}
  v^{\pi}(\alpha,\beta) = \E[\sum_{k=1}^{\infty} r_{t+k}] = \E[ \sum_{k=1}^{\infty} r(b_{t+k}, \pi(b_{t+k})) | b_t= ( \alpha, \beta) ]
\end{eqnarray}
where the expectation is taken with respect to all future belief states $(b_{t+1}, \ldots, b_{t+k}, \ldots)$. A policy $\pi(\alpha,\beta)$ defines a mapping from belief states to available actions $a$. The immediate reward $r(\alpha,\beta,a)$ is the expected reward given the belief state:
\begin{eqnarray}
  \label{eq:rewardGivenBelief}
  r(\alpha,\beta, A_S) &=& n R_S  \\
  r(\alpha,\beta, A_R) &=& \int_{\mu = 0}^1 R(\mu, A_R) \betarm(\mu|\alpha,\beta) d\mu \nonumber  \\
&=&  R_P  \times [1 - I_{0.5}(\alpha, \beta)] + R_N  \times
I_{0.5}(\alpha, \beta) \nonumber\\
   r(\alpha,\beta, A_L) &=&   R_N  \times [1 - I_{0.5}(\alpha, \beta)] + R_P  \times
I_{0.5}(\alpha, \beta)  \nonumber
\end{eqnarray}
where the regularized incomplete beta function $I_x(\alpha,\beta) = \int_{\mu=0}^x \betarm(\mu|\alpha,\beta) d\mu$ is the cumulative distribution function of the beta distribution. At belief state $(\alpha,\beta)$, the agent receives $n$ more samples when $A_S$ is selected, resulting in a reward of $nR_S$. When $A_R$ is selected, the expected reward $r(\alpha,\beta, A_R)$ depends on the probability density of the hidden parameter $\mu$ given the belief state $(\alpha$, $\beta)$: with probability $I_{0.5}(\alpha, \beta)$ the true parameter $\mu$ is less than $0.5$, making $A_R$ an incorrect decision with penalty $R_N$, while with probability $1 - I_{0.5}(\alpha, \beta)$, action $A_R$ is correct and earns reward $R_P$. In addition, the expected reward at the termination state (defined below) is always zero, $r(-1, -1, a) = 0$.
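For concreteness, the expected immediate rewards of equation~\ref{eq:rewardGivenBelief} can be evaluated in Python for integer-valued beliefs, using the closed-form binomial sum for $I_{0.5}$ with integer shape parameters. The reward values match those used in our simulations; the function names are ours:

```python
from math import comb

R_P, R_N, R_S, n = 1000.0, 0.0, -1.0, 1   # reward parameters used in the text

def I_half(alpha, beta):
    """Regularized incomplete beta I_0.5(alpha, beta) for integer shapes,
    computed from the binomial-sum identity."""
    m = alpha + beta - 1
    return sum(comb(m, i) for i in range(alpha, m + 1)) / 2 ** m

def immediate_reward(alpha, beta, action):
    """Expected reward r(alpha, beta, a) of equation (eq:rewardGivenBelief)."""
    I = I_half(alpha, beta)               # Pr(mu < 0.5 | alpha, beta)
    if action == 'S':
        return n * R_S
    if action == 'R':
        return R_P * (1 - I) + R_N * I
    return R_N * (1 - I) + R_P * I        # action == 'L'
```

For a symmetric belief such as $(\alpha,\beta)=(5,5)$, $I_{0.5}=0.5$ and the two eye movements are equally valuable.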

One standard way~\cite{Kaelbling98} to solve a POMDP is to first convert it into a Markov decision process (MDP) over belief states, and then apply standard dynamic programming techniques to compute the value function in equation~\ref{eq:valueFunction}. In the corresponding belief MDP, the state transition probabilities $T(b_t | b_{t-1}, a_{t-1})$ depend on the previous action. When $a_{t-1} = A_S$, the belief state is updated by combining the previous belief state and the current observation using Bayes' rule:
\begin{eqnarray}
  \label{eq:beliefUpdate}
 T(b_t | b_{t-1}, A_S) &=& \Pr{\alpha', \beta' | \alpha, \beta, A_S} \nonumber \\
&=& \Pr{o_t|\alpha, \beta}\, \delta_{\alpha' = \alpha + o_t}\, \delta_{\beta' = \beta + n - o_t} \quad \quad \mbox{$\forall$ $o_t\in\{0,\ldots, n\}$}
\end{eqnarray}
where $\delta(.)$ is the Kronecker delta, and
$\Pr{o_t|\alpha, \beta}$ is the expected value of the likelihood $\Pr{o_t|\mu}$ under the current belief $b_{t-1} = (\alpha, \beta)$:
\begin{eqnarray}
  \label{eq:likelihoodGivenBelief}
  \Pr{o_t | \alpha, \beta} =  {n \choose o_t} \E[\mu^{o_t} (1-\mu)^{n- o_t} | \alpha, \beta]  = {n \choose o_t}\frac{B(\alpha + o_t, \beta + n - o_t)}{B(\alpha, \beta)},
\end{eqnarray}
where $B(\cdot,\cdot)$ denotes the beta function; this predictive distribution is stationary, $i.e.$, independent of time $t$.
When the selected action is $A_R$ or $A_L$, the agent stops sampling and makes the corresponding eye movement. To account for such cases, we include an additional state $(\alpha = -1, \beta = -1)$, a zero-reward termination state indicating the end of a trial once $A_R$ or $A_L$ is selected. Formally, the transition probabilities into the termination state are defined as $\Pr{\alpha' = \beta' = -1|\alpha, \beta, A_R \lor A_L}  = 1$ for any $\alpha$ and $\beta$. With the time-independent belief state transitions $\Pr{\alpha', \beta' | \alpha, \beta, a}$, the optimal value $v^*$ and policy $\pi^* = \arg\max_{\pi} v^\pi$ can be obtained by solving the Bellman equations below:
\begin{eqnarray}
  \label{eq:bellman}
 \pi^*(\alpha, \beta)  &=& \argmax_{a \in \{A_L, A_R, A_S\}} [\ r(\alpha, \beta, a) +  \sum_{\alpha', \beta'}\Pr{\alpha', \beta' | \alpha, \beta,a} v^*(\alpha', \beta')] \nonumber \\
  v^*(\alpha, \beta)  &=& \max_a[\ r(\alpha,\beta,a) +  \sum_{\alpha', \beta'}\Pr{\alpha', \beta' | \alpha, \beta ,a} v^*(\alpha', \beta')] 
\end{eqnarray}


The belief state of this POMDP can be parametrized by the two parameters $(\alpha, \beta)$, which in turn are functions of two sufficient statistics $(m_R, m_L)$: the numbers of rightward- and leftward-moving dots encountered. Note that the time $t$ is encoded implicitly in the belief state, $t = \frac{m_R + m_L}{n}$: the total number of random dot samples observed, $m$, is directly proportional to the elapsed time, and the belief states $(m_R, m_L)$ at time $t$ satisfy the constraint $m_R + m_L = nt$. Moreover, the one-step belief transition probability matrix $T(b_t|b_{t-1},n = n_0)$ equals the $n_0$-step transition matrix $T^{n_0}(b_t | b_{t-1}, n = 1)$, so the solution to the Bellman equations~\ref{eq:bellman} is independent of $n$. Therefore, unless otherwise mentioned, we consider the most general scenario in which the agent selects an action whenever a new random dot sample is available, $i.e.$, $n = 1$ and $m = t$.

\subsection{Dynamic Programming}
In this section, we apply value iteration to the POMDP defined above to solve for the optimal value and policy functions. At first glance, equation~\ref{eq:valueFunction} corresponds to an infinite-horizon problem without a discount factor, which may lead to unbounded values. However, it is easy to show that the optimal value function is finite for every $(m_R,m_L)$, a necessary condition for the application of value iteration. Letting $\pi_N(m_R,m_L) = A_L$ be a constant policy over the entire belief state space, we have $v^* \ge v^{\pi_N} \ge R_N$. In addition, it is straightforward to show that $v^* \le R_P$, where equality holds only if the agent knows the true value of $\mu$ before each trial. Since each observation costs $|R_S|$ in reward, an agent following the optimal policy will make a decision after at most $\frac{R_N - R_P}{R_S}$ steps on average. Thus there always exists at least one ``proper'' policy under which the probability of reaching the termination state within finitely many time steps is positive regardless of the initial state:
\begin{eqnarray}
  \label{eq:properPolicy}
  \Pr{b_{k + t} \neq (-1,-1) | b_{t}}   \to 0 \quad \mbox{as $k \to \infty$}.
\end{eqnarray}
That is, the probability of not having reached the termination state after $k$ time steps diminishes to zero as $k$ becomes large.
Consequently, it can be shown (see~\cite{Bertsekas95a}, Vol.\ II, Section 2.2.1) that standard dynamic programming techniques~\cite{Bertsekas96, Sutton98}, such as value iteration and policy iteration, yield a solution to the Bellman equation~\ref{eq:bellman} after finitely many iterations.
Moreover, although the number of samples in a trial could be infinite, we are only interested in the optimal decision policy within some finite number $T$ of observations. By equation~\ref{eq:properPolicy}, the values of belief states $b_t$ with $t \le T$ become effectively independent of those at $t = k + T$ for large $k$. As a result, any modification of the transition probabilities for $b_{t = k+T}$ only changes the values at belief states $b_t$ with $t$ close to $k + T$, and has negligible effect on values at $b_t$, $t \le T$. By setting
\begin{eqnarray}
  \label{eq:finitePOMDP}
  \Pr{b_{k + T + 1} = (-1, -1) | b_{k +T}} = 1,
\end{eqnarray}
we obtain an MDP with a finite belief state space, $m_R + m_L \le n(k + T)$. Values and policies at states $m_R + m_L \le nT$ in the original MDP with infinite belief state space can then be approximated by those of the modified finite-state MDP of size $m_R + m_L \le n(k + T)$.
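A minimal Python sketch of this computation with $n = 1$ and a uniform prior is given below. Because $m$ increases by exactly one per sample, a single backward sweep over $m$ solves the truncated Bellman equations; the horizon $H$ is an illustrative choice, and the threshold-extraction helper is ours:

```python
from math import comb

R_P, R_N, R_S = 1000.0, 0.0, -1.0     # rewards used in the text
a0, b0 = 1, 1                         # uniform Beta prior
H = 100                               # truncation horizon (n(k + T) in the text)

def I_half(alpha, beta):
    """I_0.5(alpha, beta) for integer shapes, via the binomial-sum identity."""
    m = alpha + beta - 1
    return sum(comb(m, i) for i in range(alpha, m + 1)) / 2 ** m

def r_move(mR, mL, right):
    """Expected immediate reward of A_R (right=True) or A_L (right=False)."""
    I = I_half(mR + a0, mL + b0)      # Pr(mu < 0.5 | belief)
    return R_P * (1 - I) + R_N * I if right else R_N * (1 - I) + R_P * I

# Backward induction over m = mR + mL: with n = 1 the total count m increases
# by one per step, so one backward sweep solves the truncated Bellman equation.
V, policy = {}, {}
for m in range(H, -1, -1):
    for mR in range(m + 1):
        mL = m - mR
        best = [(r_move(mR, mL, True), 'R'), (r_move(mR, mL, False), 'L')]
        if m < H:                     # sampling allowed before the horizon
            p = (mR + a0) / (m + a0 + b0)    # predictive Pr(o_t = 1)
            vS = R_S + p * V[(mR + 1, mL)] + (1 - p) * V[(mR, mL + 1)]
            best.append((vS, 'S'))
        V[(mR, mL)], policy[(mR, mL)] = max(best)

def threshold(m):
    """Smallest muhat at which the policy picks A_R at time m, if any."""
    for mR in range(m + 1):
        if policy[(mR, m - mR)] == 'R':
            return mR / m
    return None
```

With these parameters the policy samples at the start of a trial, and the extracted threshold decreases with $m$, anticipating the collapsing bound discussed below.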

\begin{figure}
  \centering
\subfigure[]{
\includegraphics[scale=0.23]{model.jpg}\label{fig:model}
}
  \subfigure[]{
  \includegraphics[scale=0.045]{value.jpg}\label{fig:learnedValue}
}
  \subfigure[]{
\includegraphics[scale=0.045]{policy.jpg}
\label{fig:learnedPolicy}
}
\caption{(a) In order to solve the POMDP problem, the agent maintains a belief $b_t$, a probability distribution over states of the world. An action is provided by the learned policy $\pi$, which maps belief states to actions. (b) Optimal value as a joint function of the ratio $\muhat = \frac{m_R}{m}$ and the total number of observations $m$. (c) Optimal policy as a function of $\muhat$ and $m$. Blue, red, and green dots represent belief states whose optimal actions are $A_L, A_S$, and $A_R$, respectively. Model parameters: $R_P = 1000$, $R_S = -1$, and $R_N = 0$. }
\label{fig:learnedValueAndPolicy}
\end{figure}


Figure~\ref{fig:learnedValue} shows the optimal value function for $m_R + m_L \le 400$ learned by applying standard value iteration, with model parameters $k = 2000$, $R_P = 1000$, $R_N = 0$, and $R_S = -1$. An identical policy is learned for higher values of $k$, indicating that the probability of not reaching the termination state after $k \ge 2000$ samples is effectively zero under the optimal policy. The $x$-axis of figure~\ref{fig:learnedValue} represents the total number of observations $m = m_R+m_L$ encountered thus far, which encodes the elapsed time in the trial. The $y$-axis represents the ratio $\muhat = \frac{m_R}{m_R+m_L}$, an estimator of the true parameter $\mu$. In general, the model predicts a high value when $\muhat$ is close to $1$ or $0$: at these two extremes, selecting the appropriate action has a high probability of receiving the large positive reward $R_P$. For $\muhat$ near $0.5$, on the other hand, choosing $A_L$ or $A_R$ has a high chance of ending in an incorrect decision and the penalty $R_N$. Thus belief states with $m_R \approx m_L$ have a much lower value than belief states with $m_R \gg m_L$ or $m_R \ll m_L$.


Figure~\ref{fig:learnedPolicy} shows the corresponding optimal policy $\pi^*$ as a joint function of $\muhat$ and time $m$. The optimal policy $\pi^*$ partitions the belief state space into three regions, $\Pi^R$, $\Pi^L$, and $\Pi^S$, the sets of belief states preferring actions $A_R$, $A_L$, and $A_S$, respectively. Let $\Pi^a_m = \Pi^a \cap \{m_R, m_L | m_R + m_L = m\}$ for $a \in \{A_R, A_L, A_S\}$. The boundary between $\Pi^R$ ($\Pi^L$) and $\Pi^S$ determines the decision threshold for choosing $A_R$ ($A_L$). Early in the trial, when $m$ is small, the model selects $A_S$ regardless of the value of $\muhat$, because at small $m$ the variance of the point estimator $\muhat(m)$ is high. For example, even if $\muhat = 1$ at $m = 2$, the probability that the true $\mu < 0.5$ is still substantial. The ``sample'' action $A_S$ reduces this variance by allowing more evidence to be collected. As $m$ becomes larger, the variance of $\muhat$ decreases, and the deviation between $\muhat$ and the true value of $\mu$ diminishes by the law of large numbers. Consequently, the agent will pick action $A_R$ even when $\muhat$ is only slightly above $0.5$. This gradual decrease over time in the threshold for choosing action $A_R$ or $A_L$ has been called a ``collapsing'' bound in the decision making literature~\cite{Latham07,FrazierYu08,Churchland08}. In the next section we show that such a shrinking decision threshold is an emergent property of the POMDP model that holds for arbitrary model parameters $R_P > 0$, $R_N \le 0$, and $R_S < 0$.

\subsection{Properties of optimal policy and value function}
First we list some general properties of the optimal policy and value function, derived directly from the Bellman equations~\ref{eq:bellman}, with proof sketches.
\begin{property}
  \label{thm:betaFunction}
   $r(m_R, m_L, A_R) = \frac{m_R+\alpha_0}{m  + \alpha_0 + \beta_0}r(m_R+1, m_L, A_R) + \frac{m_L +\beta_0}{m  + \alpha_0 + \beta_0} r(m_R, m_L + 1, A_R)$.
\end{property}
{\it Proof.} The reward function can be rewritten as $r(m_R,m_L, A_R) = R_P + (R_N - R_P)I_{0.5}(m_R + \alpha_0, m_L+  \beta_0)$, where, for integer shape parameters, $I_{0.5}(\alpha,\beta) = \left(\frac{1}{2}\right)^{\alpha+\beta-1} \sum_{i=\alpha}^{\alpha+\beta-1}{\alpha+\beta-1 \choose i}$. It is then easy to show that $I_x(\alpha+1,\beta)  = I_x(\alpha,\beta) - \frac{x^\alpha(1-x)^\beta}{\alpha  B(\alpha,\beta)}$ and  $I_x(\alpha,\beta + 1)  = I_x(\alpha,\beta) + \frac{x^\alpha(1-x)^\beta}{\beta  B(\alpha,\beta)}$. It follows that $\alpha I_{0.5}(\alpha+1,\beta) + \beta I_{0.5}(\alpha, \beta+1) = (\alpha+\beta)I_{0.5}(\alpha,\beta)$. $\Box$
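The recurrence at the heart of this proof is easy to check numerically. The short script below verifies $\alpha I_{0.5}(\alpha+1,\beta) + \beta I_{0.5}(\alpha,\beta+1) = (\alpha+\beta) I_{0.5}(\alpha,\beta)$ over a range of integer shape parameters; it is a sanity check on the identity, not part of the proof:

```python
from math import comb

def I_half(alpha, beta):
    """I_0.5(alpha, beta) for integer shapes via the binomial sum in the proof."""
    m = alpha + beta - 1
    return sum(comb(m, i) for i in range(alpha, m + 1)) / 2 ** m

# Verify alpha*I(alpha+1, beta) + beta*I(alpha, beta+1) = (alpha+beta)*I(alpha, beta)
for a in range(1, 8):
    for b in range(1, 8):
        lhs = a * I_half(a + 1, b) + b * I_half(a, b + 1)
        rhs = (a + b) * I_half(a, b)
        assert abs(lhs - rhs) < 1e-12
```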
\begin{property}
  \label{thm:AS}
  If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, then $\pi^*(m_R, m_L+1) = A_S$
 and $v^*(m_R, m_L+1) - r(m_R, m_L+1, A_R) > \frac{-R_S(m+\alpha_0+\beta_0)}{m_L+\beta_0}$ for $m_R > m_L$.
\end{property}
{\it Proof.} 
 Let $d(m_R, m_L) = v^*(m_R,m_L) - r(m_R,m_L,A_R)$.  From $\pi^*(m_R,m_L) = A_S$, we have $d(m_R,m_L) > 0$:
{\small
   \begin{eqnarray*}
d(m_R, m_L) & = & R_S + \sum_{m_R', m_L'}\Pr{m_R', m_L'|m_R,m_L, A_S}v^*(m_R',m_L') - r(m_R,m_L,A_R) \\
 &=&  R_S + \frac{m_R+\alpha_0}{m+\alpha_0+\beta_0}d(m_R+1, m_L) +  \frac{m_L + \beta_0}{m+\alpha_0+\beta_0}d(m_R, m_L+1) > 0 
   \end{eqnarray*}
}
Since $\pi^*(m_R + 1, m_L) = A_R$, we have $d(m_R+ 1, m_L) = 0$, and hence $d(m_R, m_L+1) > \frac{-R_S(m+\alpha_0+\beta_0)}{m_L+\beta_0}>0$. $\Box$
 \begin{property}
 \label{thm:AS2}
    If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, then $\pi^*(m_R-1, m_L) = A_S$ for $m_R > m_L + 1$. 
 \end{property}
{\it Proof.} From property~\ref{thm:AS} we have $d(m_R-1, m_L+1) > -2R_S\frac{m_R + \alpha_0}{m_L+\beta_0}$. It follows that $d(m_R-1, m_L) > R_S + \frac{m_L+\beta_0}{m - 1 +\alpha_0+\beta_0} d(m_R-1,m_L+1) > -R_S\frac{m_R - m_L + 1}{m-1 + \alpha_0 + \beta_0} > 0$. $\Box$

 \begin{property}
 \label{thm:AS3}
If $\pi^*(m_R,m_L) = A_S$, then $\pi^*(m_R - 1, m_L + 1) = A_S$ for $m_R > m_L$.
 \end{property}
 \begin{property}
 \label{thm:AS4}
If $\pi^*(m_R,m_L) = A_R$, then $\pi^*(m_R + 1, m_L - 1) = A_R$ for $m_R > m_L$.
 \end{property}
The above two properties can be shown in a similar way to property~\ref{thm:AS2}.
\begin{theorem}
  The decision threshold for $A_R$ is a decreasing function of $m$. 
\end{theorem}
{\it Proof.} If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, then from properties~\ref{thm:AS} to \ref{thm:AS4} we have $\pi^*(m_R, m_L+1) = A_S$ and $\pi^*(m_R-1, m_L) = A_S$. The decision boundaries for $A_R$ at times $m-1$ and $m+1$ therefore satisfy $\phi^R(m-1) \ge \frac{m_R}{m-1}$ and $\phi^R(m+1) = \frac{m_R+1}{m+1}$, so $\phi^R(m+1) < \phi^R(m-1)$ for $m > 1$. $\Box$

Since $r(m_R, m_L, A_R) = r(m_L, m_R, A_L)$, we have $v^*(m_R,m_L) = v^*(m_L, m_R)$. Similar properties for $A_L$ hold for $m_L > m_R$. Moreover, the decision threshold for $A_L$ is an increasing function of $m$, and $\phi^L(m) = 1 - \phi^R(m)$.


\section{Model predictions on psychometric function and response time}
\subsection{Reaction time experiments}
Here we construct a decision making model under the learned policy $\pi^*$ for the reaction time version of the motion discrimination task~\cite{Roitman02}. As illustrated in figure~\ref{fig:decisionMaking}, the agent maintains a running average $\muhat_t = \frac{t-1}{t}\muhat_{t-1} + \frac{1}{t}o_t$ and selects actions according to the optimal policy $\pi^*$. Upon the arrival of a new observation $o_t$, the agent makes a rightward or leftward decision and terminates the trial once $\muhat_t > \phi^R_t$ or $\muhat_t < \phi^L_t$; when $\muhat_t \in \Pi^S_t$, the agent keeps sampling and waits for the new observation $o_{t+1}$. Performance on the task under the optimal policy $\pi^*$ can be measured in terms of both the accuracy of direction discrimination (the so-called psychometric function in the literature) and the response time required to reach a decision. In this section, we derive the expected accuracy and speed of decisions as functions of the stimulus coherence $c$, and compare them to the experimental psychometric and chronometric functions of a monkey performing the same task~\cite{Roitman02}.

\begin{figure}[h!]
\subfigure[]{
\includegraphics[scale=0.22]{decisionMaking.png}\label{fig:decisionMaking}
}
\subfigure[]{
  \includegraphics[scale=0.08]{PCRT.jpg}
\label{fig:PCRT}
}
 \caption{(a) Model of the decision process under the optimal policy $\pi^*$. The input to the model is a sequence of random dot motions $o_{1:t}$.
   (b) Expected psychometric and chronometric functions. The blue solid curve and red dotted curve represent the model predictions $RT_R(c)$ and $RT_L(c)$ for $R_P = 100$. The green dashed line represents $RT_R(c)$ for $R_P = 50$. Black and red dots with error bars represent the monkey's response times in correct and incorrect trials. Data from~\cite{Roitman02}.}
\label{fig:performance}
\end{figure}

The sequence of random variables  $\{\muhat_1,\muhat_2,\ldots, \muhat_t\}$ forms a Markov chain with transition probabilities $\Pr{\muhat_t = \frac{t-1}{t}\muhat_{t-1} + \frac{1}{t} | \muhat_{t-1}} = \mu = \frac{c+1}{2}$ and $\Pr{\muhat_t = \frac{t-1}{t}\muhat_{t-1} | \muhat_{t-1}} = 1 - \mu$. Let $\Psi(\muhat_t,t|c)$ be the joint probability that the agent keeps selecting $A_S$ from time $1$ through time $t$, $\Psi(\muhat_t,t|c) = \Pr{ \muhat_1\in \Pi^S_1, \muhat_2\in \Pi^S_2,\ldots, \muhat_t \in \Pi^S_t}$. At $t=1$, the agent selects $A_S$ regardless of $\muhat_1$ under $\pi^*$, so $\Psi(\muhat_1, 1|c) = \Pr{\muhat_1}$. For $t > 1$, $\Psi(\muhat_t,t|c)$ can be updated recursively:
\begin{eqnarray}
  \Psi(\muhat_t, t|c) = \sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c)
\end{eqnarray}

Let $\Pr{t,R|c}$ and $\Pr{t,L|c}$ be the joint probability mass function that the agent makes a right or left choice at time $t$, respectively. They correspond to the probability that the point estimator $\muhat(t)$ crosses the boundary of $\Pi^R$ or $\Pi^L$ before hitting the opposite boundary at time $t$:
\begin{eqnarray}
  \label{eq:RT_PDF}
  \Pr{t,R|c} &=& \Pr{\muhat_t \in \Pi^R_t , \muhat_{t-1} \in \Pi^S_{t-1}, \ldots, \muhat_{1} \in \Pi^S_{1}|c} \nonumber\\
&=& \sum_{\muhat_t \in \Pi^R_t}\sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c) \\
  \Pr{t,L|c} &=&  \sum_{\muhat_t \in \Pi^L_t}\sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c) 
\end{eqnarray}
The probabilities of making rightward and leftward eye movements are the marginals over all possible crossing times:
$\Pr{R|c} = \sum_{t=1}^{\infty} \Pr{t,R|c}$ and $\Pr{L|c} = \sum_{t=1}^{\infty} \Pr{t,L|c}$. When the underlying motion direction is rightward, $i.e.$, $c>0$, $\Pr{R|c}$ is the accuracy of motion discrimination. The mean response times for correct and error choices are the expected crossing times under the conditional probability that the agent decides at time $t$ given that the decision is $A_R$ or $A_L$, respectively:
\begin{eqnarray}
  \label{eq:meanRT}
  RT_R(c) &=& \sum_{t=1}^{\infty} t \frac{\Pr{t,R|c}}{\Pr{R|c}}\\
  RT_L(c) &=& \sum_{t=1}^{\infty} t \frac{\Pr{t,L|c}}{\Pr{L|c}}
\end{eqnarray}
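These quantities can be evaluated exactly by the recursion above; as a quick sanity check, one can also estimate $\Pr{R|c}$ and $RT_R(c)$ by Monte Carlo simulation. The sketch below uses a hypothetical collapsing threshold $\phi^R(t) = 0.5 + 2/(t+2)$ rather than the learned optimal boundary, so the numbers it produces are illustrative only:

```python
import numpy as np

def simulate(c, phi_R, T=300, trials=2000, seed=1):
    """Monte Carlo estimate of Pr(R|c) and RT_R(c) for a collapsing
    threshold phi_R(t), with phi_L(t) = 1 - phi_R(t) and n = 1."""
    rng = np.random.default_rng(seed)
    mu = (c + 1) / 2
    n_right, rts = 0, []
    for _ in range(trials):
        m_R = 0
        for t in range(1, T + 1):
            m_R += rng.random() < mu      # one Bernoulli(mu) dot per step
            muhat = m_R / t
            if muhat > phi_R(t):          # crosses the rightward boundary
                n_right += 1
                rts.append(t)
                break
            if muhat < 1 - phi_R(t):      # crosses the leftward boundary
                break
    return n_right / trials, sum(rts) / max(len(rts), 1)

phi = lambda t: 0.5 + 2 / (t + 2)         # hypothetical collapsing bound
acc_weak, rt_weak = simulate(0.064, phi)
acc_strong, rt_strong = simulate(0.512, phi)
```

As in the monkey data, accuracy rises and mean response time falls with coherence under any such collapsing bound.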

The left panel of figure~\ref{fig:PCRT} shows performance accuracy as a function of motion strength $c$ for the model (solid curve) and a monkey (black dots). The model parameters are the same as those in figure~\ref{fig:learnedValueAndPolicy}. The right panel of figure~\ref{fig:PCRT} shows the mean response time $RT_R(c)$ of correct choices as a function of coherence $c$ for the model (solid curve) and the monkey (black dots). Note that $RT_R(c)$ is the expected number of observations before a rightward eye movement $A_R$. To compare directly with the monkey data $RT^*_R(c)$, which is in units of time, linear regression was used to determine the duration of a single observation $a$ and the onset of decision time $b$: $RT^*_R(c) = a \cdot RT_R(c) + b$ for $c = 0.032, 0.064, 0.128, 0.256$, and $0.512$. The dotted curve in figure~\ref{fig:PCRT} exhibits longer response times for error choices, as generally observed in the monkey data. Figure~\ref{fig:PCRT} also shows that a decrease in $R_P$ encourages faster responses at lower accuracy (dashed curve). Such changes in the speed-accuracy trade-off due to adjusted reward size for correct choices have also been reported in previous experimental studies~\cite{Hanks11}. Our model provides a quantitative framework for predicting the effects of the reward size $R_P$ on the accuracy and speed of decision making. As figure~\ref{fig:PCRT} depicts, the model prediction achieves a close fit to the monkey data. Note that we did not attempt to quantitatively fit a particular monkey's data; no fitting procedure was employed other than a linear search over $R_P \in [10, 200]$ in steps of $10$ to determine the model parameters.

\subsection{Combination of sensory evidence with prior information}
\begin{figure}[h!]
  \centering
\subfigure[]{
  \includegraphics[scale=0.06]{prior_wrong.jpg}
}
\subfigure[]{
  \includegraphics[scale=0.06]{prior.jpg}
}
\subfigure[]{
  \includegraphics[scale=0.37]{prior_exp.png}
}
 \caption{Psychometric and chronometric functions in the presence of biased (dotted) and neutral (solid) prior knowledge. (a) Model prediction using the standard Bayesian combination. (b) Model prediction using equation~\ref{eq:priorUpdate}. (c) Monkey data from~\cite{Hanks11}.}
 \label{fig:biased}
\end{figure}
Decisions are often based on a combination of sensory evidence and prior information about the true state. The standard Bayesian way of combining the two is to initialize the belief over $\mu$ with the prior. However, the prior probability distribution $\textrm{Pr}_0[\mu]$ learned from previous trials may differ from the distribution in the current trial, and unless a large number of samples is observed, a misleading prior belief harms inference over $\mu$~\cite{Gallistel2009}. Alternatively, we construct a mixture model in which, with probability $1 - \gamma$, $\mu$ is drawn from the posterior distribution given observations $o_{1:t}$ defined in equation~\ref{eq:posterior}, and with probability $\gamma$, $\mu$ is redrawn from the ``prior'' distribution $\textrm{Pr}_0(\mu)$:
\begin{eqnarray}
  \label{eq:priorUpdate}
b_t'(\mu) = \Pr{\mu | o_{1:t}, P_0} = (1 - \gamma) \Pr{\mu|o_{1:t}} + \gamma \textrm{Pr}_0[\mu]
\end{eqnarray}
where the weight $\gamma$ represents the relative reliability of the prior information. Similar models for combining prior probabilities have been proposed by~\cite{Yu09,Fard11}.
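A discretized version of this mixture update is straightforward to implement. In the sketch below, the grid over $\mu$, the biased prior $\textrm{Pr}_0$, and the value $\gamma = 0.25$ are illustrative choices:

```python
import numpy as np

mu = np.linspace(0.001, 0.999, 999)       # grid over the hidden state mu
prior0 = np.where(mu > 0.5, 1.6, 0.4)     # biased prior: Pr0(mu > 0.5) ~ 0.8
prior0 = prior0 / prior0.sum()

def mixture_belief(obs, gamma=0.25):
    """Mixture update of equation (eq:priorUpdate): combine the uniform-prior
    posterior over mu given binary observations obs with Pr0, weighted by gamma."""
    m_R = sum(obs)
    like = mu ** m_R * (1 - mu) ** (len(obs) - m_R)
    posterior = like / like.sum()         # Pr(mu | o_{1:t}), uniform prior
    return (1 - gamma) * posterior + gamma * prior0

b = mixture_belief([1, 0, 1, 1])          # three of four dots move rightward
p_right = b[mu > 0.5].sum()               # belief that the motion is rightward
```

Because both components are normalized, the mixture remains a proper distribution for any $\gamma \in [0,1]$.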

Figure~\ref{fig:biased}(b) depicts model predictions when the prior probability of an underlying rightward direction, $i.e.$, $\int_{0.5}^1\textrm{Pr}_0[\mu] d\mu$, is $0.8$ (dashed line) and $0.5$ (solid line), respectively. Model predictions are obtained by applying value iteration to the corresponding POMDP with belief update~\ref{eq:priorUpdate}, $\gamma = 0.25$, and the other model parameters the same as those in figure~\ref{fig:learnedValueAndPolicy}. Unlike the standard Bayesian model shown in figure~\ref{fig:biased}(a), the model using equation~\ref{eq:priorUpdate} predicts response times that are asymmetric about zero percent coherence under a biased prior: longer response times when $c < 0$ and shorter response times when $c > 0$. This characteristic feature also appears in the monkey data (figure~\ref{fig:biased}(c)). Because the experimental data in~\cite{Hanks11} are not yet publicly available, we focus here on qualitative matches.


\section{Discussion}
Considerable progress has been made in understanding the detailed mechanisms of decision making in random dots motion discrimination tasks. Drift diffusion models~\cite{Palmer05, Bogacz06} have successfully provided \emph{descriptive} accounts of both monkeys' decision accuracies and mean response times for correct choices. To provide a normative account of \emph{why} the monkey's behavior is optimal, we formalized the task within the framework of partially observable Markov decision processes (POMDPs) and analyzed the optimal decision making strategy. Our model exhibits psychometric and chronometric functions that are quantitatively close to those of monkeys. We showed through analytical and numerical results that the optimal threshold for selecting overt actions is a declining function of time.
Such a collapsing decision bound has been shown to be the optimal policy for decision making under the pressure of an assumed stochastic deadline~\cite{FrazierYu08}, and has also been introduced as an ad-hoc assumption in the drift diffusion framework~\cite{Ditterich06, Latham07, Churchland08} to explain finite response times at zero percent coherence and longer decision times in error trials.

Besides dynamic programming techniques, the optimal policy $\pi^*$ and value $v^*$ can be learned via Monte Carlo approximation-based methods such as temporal-difference (TD) learning~\cite{Sutton98}. There is considerable evidence that the firing rates of dopaminergic neurons represent the reward prediction error, the difference between the expected reward and the obtained reward at the end of each discrimination trial. This suggests a future neural implementation of value learning that builds on previous TD learning models of the basal ganglia~\cite{Schult97, Dayan08, Bogacz11, Rao10}. The neural mechanism for decision making within a single trial is similar to that of the drift diffusion model: sensory neurons count the numbers of rightward- and leftward-moving dots seen thus far and employ divisive normalization to maintain the point estimator $\muhat_t = \frac{m_R}{m_R+m_L}$. The response of LIP neurons then represents the difference between $\muhat$ and the optimal decision threshold $\phi^R(t)$ learned via TD learning over previous trials. A rightward eye movement is initiated only when the LIP response $\muhat_t - \phi^R(t)$ reaches the fixed bound $0$.

This work makes several interesting empirical predictions. The model makes quantitative predictions about how the reward size for correct choices affects the agent's performance. It also predicts that the optimal decision threshold depends directly on the number of observations rather than on the amount of elapsed time; one could test this by varying the duration of each sample, i.e., by making the frame rate of the random dots time-variant. By interpreting decision making as the learning task of inferring the true parameter, our model shows an optimal way of balancing the trade-off between sample size and learning accuracy, thus establishing a new bridge between biological and machine learning.


\bibliographystyle{unsrt}
\bibliography{pomdp}

\end{document}
