\documentclass{article} % For LaTeX2e
\usepackage{subfigure, graphicx,times,learning}
\usepackage{nips11submit_e}

\title{Reward Optimization in the Primate Brain: A POMDP Model of Decision Making under Uncertainty}


\author{
Yanping Huang \\
Department of Computer Science and Engineering\\
University of Washington, Seattle, WA 98105 \\
\texttt{huangyp@cs.washington.edu} \\
\And
Rajesh P. N. Rao \\
Department of Computer Science and Engineering\\
University of Washington, Seattle, WA 98105 \\
\texttt{rao@cs.washington.edu} \\
}


\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}
\newcommand{\muhat}{\hat{\mu}}
\newcommand{\betarm}{\rm{Beta}}
\newcommand{\binorm}{\rm{Bino}}
%\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}


\maketitle
\begin{abstract}
Behavioral studies involving tasks such as the random dots motion
discrimination task have provided valuable insights into how the brain
makes decisions under uncertainty. Drift diffusion models have been
especially useful in accounting for the psychometric function and mean
response times for correct choices in these tasks. However, to explain
finite response time for zero motion coherence as well as increased
response times for incorrect choices, one is forced to make ad-hoc
assumptions in these models such as a time-dependent collapsing
decision boundary and hypothetical deadlines. We show that such
assumptions are unnecessary when decision making is viewed within the
framework of partially observable Markov decision processes
(POMDPs). We show that the motion discrimination task reduces to the
problems of (1) computing beta-distributed beliefs over the unknown
motion strength from noisy observations, and (2) selecting actions
based on these beliefs to maximize the expected sum of future
rewards. The resulting optimal policy (belief-to-action mapping) can
be shown to be equivalent to a collapsing decision threshold that
governs the switch from evidence accumulation to making a
discrimination decision. We additionally show how prior knowledge can
be incorporated within the POMDP model to explain recently reported
results on the effects of prior probability on decision making.
\end{abstract}

\section{Introduction}
Humans and animals constantly face the problem of estimating unknown
world states and choosing actions based on noisy
observations. Experimental and theoretical studies~\cite{Knill96,
  Zemel98, Rao04, Ma06} have suggested that the brain may implement an
approximate form of Bayesian inference for perception. How actions are
chosen based on such probabilistic representations of hidden state
remains an interesting open question.  Daw and Dayan~\cite{Daw06,
  Dayan08} explored the suitability of decision theoretic and
reinforcement learning models to understanding various neurobiological
experiments. Bogacz and colleagues proposed a model that combines a
traditional decision making model with reinforcement learning
\cite{Bogacz11}. Rao \cite{Rao10} proposed a neural model for decision
making based on the framework of partially observable Markov decision
processes (POMDPs)~\cite{Kaelbling98}. In this paper, we build on
these previous efforts by exploring a POMDP model for the well-known
random dots motion discrimination tasks~\cite{Shadlen01}. We derive an
optimal policy for the task from first principles. We show that the
task reduces to the problems of (1) computing beta-distributed beliefs
over the unknown motion strength from noisy observations, and (2)
selecting actions based on these beliefs to maximize the expected sum
of future rewards. Without making ad-hoc assumptions such as a
hypothetical deadline, a declining threshold that switches between
accumulating evidence and making discrimination decisions emerges
naturally in our model via reward maximization. We present results
comparing the model's predictions to experimental data and show that
the model can explain both reaction time and accuracy data as well as
recent results on the effects of varying prior probability of motion
direction.


\section{Decision making under the POMDP framework}
\subsection{Model Setup}
We model the random dots motion discrimination task using the POMDP
framework. In each trial, the experimenter chooses a fixed motion
strength $c$ ($-1 \le c \le +1$) where $-1$ corresponds to $100\%$
leftward motion (all dots moving leftward) and $+1$ corresponds to
$100\%$ rightward motion (all dots moving rightward). Intermediate
values of $c$ represent a corresponding fraction of dots moving
leftward or rightward. 

Let $n$ denote the number of random dot samples
on the screen. At time $t$, the agent receives noisy measurements $o_t
\in \{0, \ldots, n\}$, corresponding to the number of rightward moving
dots on the screen. Then, $n - o_t$ represents the number of leftward
moving dots at time $t$.  The observation $o_t$ follows a stationary
Binomial distribution, with $\Pr{o_t |\mu} ={n \choose o_t} \mu^{o_t}
(1-\mu)^{n -o_t}$ where the parameter $\mu = \frac{c+1}{2}$ represents
the probability of an individual random dot moving in the rightward
direction (note that $0 \le \mu \le 1$). We regard $\mu$ as
the hidden ``world state'' in the POMDP model.  $\mu$ is unknown to
the agent in the experiments, but it is a static constant throughout
the trial. $\mu > 0.5$ indicates that the underlying coherent motion
is rightward. 

The task of deciding the direction of motion of the
coherently moving dots is then equivalent to the task of deciding
whether $\mu$ is greater than $0.5$ or not. We assume the agent
chooses actions based on the ``belief'' state of $\mu$, which is the
posterior probability distribution over $\mu$ given a sequence of
observations $o_{1:t}$:
\begin{eqnarray}
  \label{eq:posterior}
  b_t(\mu) = \frac{\Pr{o_t|\mu}\Pr{\mu|o_{1:t-1}}}{\Pr{o_t}}  = \frac{\mu^{m_R(t)} (1-\mu)^{m_L(t)} \Pr{\mu}}{\prod_{\tau=1}^t \Pr{o_\tau}}
\end{eqnarray}
where $m(t) = n t$, $m_R(t) = \sum_{\tau = 1}^t o_\tau$, and $m_L(t) =
m(t) - m_R(t)$. To facilitate the analysis, we
represent the prior probability $\Pr{\mu}$ as a beta distribution with
parameters $\alpha_0$ and $\beta_0$. The beta distribution is a flexible representation. For example, a uniform
prior can be obtained using $\alpha_0 = \beta_0 = 1$. The posterior
distribution $\Pr{\mu | o_{1:t}}$ can be written as:
\begin{eqnarray}
  \label{eq:betaBelief}
  b_t(\mu) \propto \mu^{m_R + \alpha_0 - 1} (1-\mu)^{m_L + \beta_0 -1} = \betarm[\mu | \alpha = m_R + \alpha_0, \beta = m_L + \beta_0]
\end{eqnarray}
The belief state $b_t$ at time step $t$ thus follows a beta
distribution with two shape parameters $\alpha$ and $\beta$ as defined
above. Consequently, the posterior probability distribution of $\mu$
depends only on the number of rightward and leftward moving dots $m_R$
and $m_L$.
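As a concrete illustration, the update in equation~\ref{eq:betaBelief} amounts to adding the observed dot counts to the two beta shape parameters. The following sketch simulates a single trial; the true $\mu$, trial length, and uniform prior are illustrative choices, not values fixed by the experiments.

```python
# Sketch of the beta belief update: with a Beta(alpha0, beta0) prior,
# the posterior after t samples is Beta(m_R + alpha0, m_L + beta0).
# The true mu and the trial length below are illustrative choices.
import random

def update_belief(alpha0, beta0, observations, n=1):
    """Posterior Beta parameters given dot counts o_t in {0, ..., n}."""
    m_R = sum(observations)               # total rightward-moving dots
    m_L = n * len(observations) - m_R     # total leftward-moving dots
    return alpha0 + m_R, beta0 + m_L

random.seed(0)
mu = 0.7                                  # hidden world state (c = 0.4)
obs = [1 if random.random() < mu else 0 for _ in range(200)]  # n = 1 samples
alpha, beta = update_belief(1, 1, obs)    # uniform prior: alpha0 = beta0 = 1
posterior_mean = alpha / (alpha + beta)   # concentrates around mu as t grows
```
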

As illustrated in figure~\ref{fig:model}, the agent updates the belief
state after receiving the current observation $o_t$, and chooses one
of the three actions $a \in \{A_R, A_L, A_S\}$, denoting rightward eye
movement, leftward eye movement and sampling (i.e., waiting for one
more observation), respectively. The agent receives different rewards
$R(\mu, a)$ based on the state and the corresponding action. When the
agent makes a correct choice, i.e., a rightward eye movement $A_R$
when $\mu > 0.5$ or a leftward eye movement $A_L$ when $\mu < 0.5$,
the agent receives a positive reward $R_P > 0$. The agent receives a
negative reward (penalty) or nothing $R_N \le 0$ when an incorrect
action is chosen. We further assume that the agent is motivated by
hunger or thirst to make a decision as quickly as possible. This is
modeled using a unit penalty $R_S = -1$ for each random dot sample the
agent observes, representing the cost the agent needs to pay when
choosing the sampling action $A_S$.

Given a belief state $b_t$ determined by $(\alpha,\beta)$ as above,
the goal of the agent is to find an optimal ``policy'' $\pi^*$ that
maximizes the so-called value function $v^{\pi}(\alpha,\beta)$, which
is defined as the expected sum of future rewards given the current belief
state as given by $(\alpha,\beta)$:
\begin{eqnarray}
  \label{eq:valueFunction}
  v^{\pi}(\alpha,\beta) = \E[\sum_{k=1}^{\infty} r_{t+k}] = \E[ \sum_{k=1}^{\infty} r(b_{t+k}, \pi(b_{t+k})) | b_t= ( \alpha, \beta) ]
\end{eqnarray}
where the expectation is taken with respect to all future belief
states $(b_{t+1}, \ldots, b_{t+k}, \ldots)$. A policy
$\pi(\alpha,\beta)$ defines a mapping from a belief state to one of
the available actions $a$.  The reward term $r(\alpha,\beta,a)$
above is the expected reward given the belief state:
\begin{eqnarray}
  \label{eq:rewardGivenBelief}
  r(\alpha,\beta, A_S) &=& n R_S  \\
  r(\alpha,\beta, A_R) &=& \int_{\mu = 0}^1 R(\mu, A_R) \betarm(\mu|\alpha,\beta) d\mu \nonumber  \\
&=&  R_P  \times [1 - I_{0.5}(\alpha, \beta)] + R_N  \times
I_{0.5}(\alpha, \beta) \nonumber\\
   r(\alpha,\beta, A_L) &=&   R_N  \times [1 - I_{0.5}(\alpha, \beta)] + R_P  \times
I_{0.5}(\alpha, \beta)  \nonumber
\end{eqnarray}
where the regularized incomplete beta function $I_x(\alpha,\beta) =
\int_{\mu=0}^x \betarm(\mu|\alpha,\beta) d\mu$ is the cumulative
distribution function of the beta distribution. The above equations can be
interpreted as follows: In belief state $(\alpha,\beta)$, when $A_S$
is selected, the agent receives $n$ more samples at a cost of
$nR_S$. When $A_R$ is selected, the expected reward $r(\alpha,\beta,
A_R)$ depends on the probability density function of the hidden parameter
$\mu$ given belief state $(\alpha$, $\beta)$. With probability $
I_{0.5}(\alpha, \beta)$, the true parameter $\mu$ is less than $0.5$,
making $A_R$ an incorrect decision with penalty $R_N$, and with
probability $1 - I_{0.5}(\alpha, \beta)$, action $A_R$ is correct,
earning the reward $R_P$.  
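For integer shape parameters, $I_{0.5}(\alpha,\beta)$ reduces to a binomial tail sum (the closed form used later in the proof of Property~\ref{thm:betaFunction}), so the expected rewards in equation~\ref{eq:rewardGivenBelief} can be computed without special functions. A minimal sketch, with the rewards $R_P$, $R_N$, $R_S$ left as free parameters:

```python
# Expected rewards r(alpha, beta, a) given the current belief. For integer
# shape parameters, I_{0.5}(alpha, beta) has a binomial-sum closed form,
# so math.comb keeps the example dependency-free.
from math import comb

def I_half(alpha, beta):
    """Regularized incomplete beta I_{0.5}(alpha, beta) for integer args."""
    n = alpha + beta - 1
    return sum(comb(n, i) for i in range(alpha, n + 1)) / 2 ** n

def expected_rewards(alpha, beta, R_P=1000.0, R_N=0.0, R_S=-1.0, n=1):
    p_left = I_half(alpha, beta)          # Pr(mu < 0.5 | belief)
    return {
        'A_S': n * R_S,                   # cost of sampling once more
        'A_R': R_P * (1 - p_left) + R_N * p_left,
        'A_L': R_N * (1 - p_left) + R_P * p_left,
    }

# With a belief leaning rightward (alpha >> beta), A_R dominates A_L.
r = expected_rewards(alpha=30, beta=10)
```
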

One standard way~\cite{Kaelbling98} to solve a POMDP is to first
convert it into a Markov Decision Process (MDP) over belief states,
and then apply standard dynamic programming techniques to compute
the value function in equation~\ref{eq:valueFunction}. For the corresponding belief
MDP, we need to define the transition probabilities $T(b_t | b_{t-1}, a_{t-1})$. When $a_{t-1} = A_S$, the belief state
can be updated by combining previous belief state and current
observation, using Bayes' rule:
\begin{eqnarray}
  \label{eq:beliefUpdate}
 T(b_t | b_{t-1}, A_S) &=& \Pr{\alpha', \beta' | \alpha, \beta, A_S} \nonumber \\
&=& \Pr{o_t|\alpha, \beta}\, \delta_{\alpha' = \alpha + o_t}\, \delta_{\beta' = \beta + n - o_t} \quad \quad \mbox{$\forall$ $o_t\in\{0,\ldots, n\}$}
\end{eqnarray}
where $\delta(.)$ is the Kronecker delta. $\Pr{o_t|\alpha, \beta}$ is
the expected value of the likelihood function $\Pr{o_t|\mu}$ over the
current belief:
\begin{eqnarray}
  \label{eq:likelihoodGivenBelief}
  \Pr{o_t | \alpha, \beta} =  {n \choose o_t} \E[\mu^{o_t} (1-\mu)^{n- o_t} | \alpha, \beta]  = {n \choose o_t}\frac{B(\alpha + o_t, \beta + n - o_t)}{B(\alpha, \beta)},
\end{eqnarray}
which is a stationary distribution independent of time $t$.  When the
selected action is $A_R$ or $A_L$, the agent stops sampling and makes
an eye movement. To account for such cases, we include an additional
state $(\alpha = -1, \beta = -1)$, representing a zero-reward
termination state (i.e., $r(-1, -1,a) = 0$), indicating the end of a
trial. Formally, the
transition probabilities with respect to the termination state are
defined as $\Pr{\alpha' = \beta' = -1|\alpha, \beta, A_R \lor A_L} =
1$ for all $\alpha$ and $\beta$.  With the time-independent belief
state transition $\Pr{\alpha', \beta' | \alpha, \beta, a}$, the
optimal value $v^*$ and policy $\pi^* = \arg\max_{\pi} v^\pi$ can be
obtained by solving Bellman's equation:
\begin{eqnarray}
  \label{eq:bellman}
 \pi^*(\alpha, \beta)  &=& \argmax_{a \in \{A_L, A_R, A_S\}} [\ r(\alpha, \beta, a) +  \sum_{\alpha', \beta'}\Pr{\alpha', \beta' | \alpha, \beta,a} v^*(\alpha', \beta')] \nonumber \\
  v^*(\alpha, \beta)  &=& \max_a[\ r(\alpha,\beta,a) +  \sum_{\alpha', \beta'}\Pr{\alpha', \beta' | \alpha, \beta ,a} v^*(\alpha', \beta')] 
\end{eqnarray}

The belief state of this POMDP model is parametrized by two parameters
$(\alpha, \beta)$, both of which are in turn functions of two
sufficient statistics $(m_R, m_L)$: the number of rightward and the
number of leftward moving dots encountered. It should be noted that
information about time $t$ is encoded implicitly in the belief state:
$t = \frac{m_R + m_L}{n}$.  Thus, the total number $m = m_R + m_L$ of
random dot samples observed is directly proportional to the elapsed
time.  The belief states as given by ($m_R, m_L$) at time $t$ are subject to the
constraint $m_R + m_L= nt$. Moreover, the one-step belief transition
probability matrix $T(b_t|b_{t-1},A_S, n = n_0)$ equals the $n_0$-step transition matrix $T^{n_0}(b_t | b_{t-1}, A_S, n = 1)$, so the solution to the Bellman equations~\ref{eq:bellman} is independent of $n$. Therefore,
unless otherwise mentioned, we consider the finest-grained
scenario, in which the agent selects an action whenever a new random
dot sample becomes available, i.e., $n = 1$ and $m = t$.

\subsection{Dynamic Programming}
In this section, we apply value iteration to the POMDP defined above
and solve for the optimal value and policy functions.  At first glance,
equation~\ref{eq:valueFunction} corresponds to an infinite horizon
problem without a discount factor, which may lead to an unbounded value.
However, it is easy to show that the optimal value function is always
finite for any $(m_R,m_L)$, a necessary condition for application of
value iteration. Let $\pi_L(m_R,m_L) = A_L$ be a constant policy over
the entire belief state space. Then, we have $v^* \ge v^{\pi_L} \ge R_N$.  In
addition, it is trivial to show that $v^* \le R_P$, where the equality
holds only when the agent knows the true value of $\mu$ before each
trial. Note that one observation costs $R_S$. An agent
following the optimal policy will make a decision after at most
$\frac{R_N - R_P}{R_S}$ steps on average. Thus, there always exists at
least one ``proper'' policy under which the termination state is
reached within a finite number of time steps with positive
probability, regardless of the initial state:
\begin{eqnarray}
  \label{eq:properPolicy}
  \Pr{b_{k + t} \neq (-1,-1) | b_{t}}   \to 0 \quad \mbox{as $k \to \infty$}.
\end{eqnarray}
That is, the probability of not reaching the termination state within
$k$ time steps diminishes to zero as $k$ becomes large.  Consequently,
it can be shown (see~\cite{Bertsekas95a}, Vol II, Section 2.2.1) that
standard dynamic programming techniques~\cite{Bertsekas96, Sutton98}
such as value iteration and policy iteration will yield a solution to
the Bellman equation~\ref{eq:bellman} after at most a finite number of
iterations.  Moreover, although the number of samples in a trial could
be infinite, we are only interested in the optimal decision policy
within some finite number $T$ of observations. From
equation~\ref{eq:properPolicy}, for sufficiently large $k$, the values
of belief states $b_t$ with $t \le T$ are essentially independent of
those at $t \ge k + T$. As a result, any
modification in the transition probabilities for $b_{t = k+T}$ will
only change the values at belief states $b_t$ with $t$ close to $k +
T$, but has no effect on the values at $b_t, t \le T$.  By setting
\begin{eqnarray}
  \label{eq:finitePOMDP}
  \Pr{b_{k + T + 1} = (-1, -1) | b_{k +T}} = 1,
\end{eqnarray}
we obtain an MDP over a finite belief state space: $m_R + m_L \le n(k +
T)$. Values and policies at states with $m_R + m_L \le T$ in the original
MDP over the infinite belief space can then be approximated by those of the
modified finite-state MDP with $m_R + m_L \le k + T$.

\begin{figure}
  \centering
\subfigure[]{
\includegraphics[scale=0.23]{model.jpg}\label{fig:model}
}
  \subfigure[]{
  \includegraphics[scale=0.045]{value.jpg}\label{fig:learnedValue}
}
  \subfigure[]{
\includegraphics[scale=0.045]{policy.jpg}
\label{fig:learnedPolicy}
}
\caption{(a) To solve the POMDP problem, the agent maintains a belief $b_t$, a probability distribution over states of the world.  An action is provided by the learned policy $\pi$, which maps belief states to actions. (b) Optimal value as a joint function of the ratio $\muhat = \frac{m_R}{m}$ and the total number of observations $m$. (c) Optimal policy as a function of $\muhat$ and $m$. Blue, red, and green dots represent belief states whose optimal actions are $A_L, A_S$ and $A_R$, respectively. Model parameters: $R_P = 1000$, $R_S = -1$, and $R_N = 0$. }
\label{fig:learnedValueAndPolicy}
\end{figure}


Figure~\ref{fig:learnedValue} shows the optimal value function for
$m_R + m_L \le 400$ learned by applying standard value iteration, with
model parameters $k = 2000$, $R_P = 1000$, $R_N = 0$, and $R_S = -1$.
Identical policies are learned for higher values of $k$. This
indicates that the probability of not reaching the termination state
after $k \ge 2000$ samples is effectively zero under the optimal
policy. The $x$-axis of Figure~\ref{fig:learnedValue} represents the
total number of observations $m = m_R+m_L$ encountered thus far, which
encodes the elapsed time in the trial. The $y$-axis represents the
ratio $\muhat = \frac{m_R}{m_R+m_L}$, which is the estimator of the
true parameter $\mu$.  In general, the model predicts a high value
when $\muhat$ is close to $1$ or $0$. This is because at these two
extremes, selecting the appropriate action has a high probability of
receiving a large positive reward $R_P$. On the other hand, for
$\muhat$ near $0.5$, choosing $A_L$ or $A_R$ in these states has a
high chance of ending up with an incorrect decision and a large
negative reward $R_N$ (see \cite{Rao10} for a similar result using a
different model and under the assumption of a deadline).  Thus, belief
states with $m_R \approx m_L$ have a much lower value than belief
states with $m_R \gg m_L$ or $m_R \ll m_L$.


Figure~\ref{fig:learnedPolicy} shows the corresponding optimal policy
$\pi^*$ as a joint function of $\muhat$ and time $m$. The optimal
policy $\pi^*$ partitions the belief space into three regions: $\Pi^R$,
$\Pi^L$, and $\Pi^S$, representing the set of belief states preferring
actions $A_R$, $A_L$ and $A_S$, respectively. Let $\Pi^a_m = \Pi^a
\cap \{m_R, m_L | m_R + m_L = m\}$, for $a \in \{A_R, A_L, A_S\}$.
  Early in a trial, when $m$ is small, the model selects the sampling action $A_S$ regardless
of the value of $\muhat$. This is because for small $m$, the variance
of the point estimator $\muhat(m)$ is high.  For example, even if
$\muhat = 1$ at $m = 2$, the probability that the true $\mu < 0.5$ is
still high.  The sampling action $A_S$ is required to reduce this
variance by accruing more evidence.  As $m$ becomes larger, the
variance of $\muhat$ decreases, and the deviation between $\muhat$ and
the true value of $\mu$ diminishes by the law of large numbers.
Consequently, the agent will pick action $A_R$ even when $\muhat$ is
only slightly above $0.5$. This gradual decrease in the threshold over
time for choosing the overt actions $A_R$ or $A_L$ has been called a
``collapsing'' bound in the decision making
literature~\cite{Latham07,FrazierYu08,Churchland08}. The next section
shows that such a collapsing decision threshold is an emergent
property of the POMDP model and holds for arbitrary model parameters
$R_P > 0, R_N < 0$ and $R_S \le 0$.

\subsection{Properties of optimal policy and value function}
First, we list some general properties of the optimal policy and value
function derived directly from the Bellman equations~\ref{eq:bellman}
(for space considerations, we only sketch the proofs).
\begin{property}
  \label{thm:betaFunction}
   $r(m_R, m_L, A_R) = \frac{m_R+\alpha_0}{m  + \alpha_0 + \beta_0}r(m_R+1, m_L, A_R) + \frac{m_L +\beta_0}{m  + \alpha_0 + \beta_0} r(m_R, m_L + 1, A_R)$.
\end{property}
{\it Proof.} The reward function can be rewritten as $r(m_R,m_L, A_R) = R_P + (R_N - R_P)I_{0.5}(m_R + \alpha_0, m_L+  \beta_0)$, where, for integer parameters, $I_{0.5}(\alpha,\beta) = \left(\frac{1}{2}\right)^{\alpha+\beta-1} \sum_{i=\alpha}^{\alpha+\beta-1}{\alpha+\beta-1 \choose i}$. It is then easy to show that $I_x(\alpha+1,\beta)  = I_x(\alpha,\beta) - \frac{x^\alpha(1-x)^\beta}{\alpha  B(\alpha,\beta)}$ and  $I_x(\alpha,\beta + 1)  = I_x(\alpha,\beta) + \frac{x^\alpha(1-x)^\beta}{\beta  B(\alpha,\beta)}$. Finally, we have $\alpha I_{0.5}(\alpha+1,\beta) + \beta I_{0.5}(\alpha, \beta+1) = (\alpha+\beta)I_{0.5}(\alpha,\beta)$. $\Box$
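The recursion in Property~\ref{thm:betaFunction} can also be checked numerically over a grid of integer shape parameters, using the binomial-sum closed form for $I_{0.5}$:

```python
# Numerical check of the recursion behind Property 1:
#   a * I(a+1, b) + b * I(a, b+1) = (a + b) * I(a, b),
# with I_{0.5}(a, b) computed from its binomial-sum closed form.
from math import comb

def I_half(a, b):
    n = a + b - 1
    return sum(comb(n, i) for i in range(a, n + 1)) / 2 ** n

for a in range(1, 16):
    for b in range(1, 16):
        lhs = a * I_half(a + 1, b) + b * I_half(a, b + 1)
        rhs = (a + b) * I_half(a, b)
        assert abs(lhs - rhs) < 1e-9
```
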
\begin{property}
  \label{thm:AS}
  If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, then $\pi^*(m_R, m_L+1) = A_S$
 and $v^*(m_R, m_L+1) - r(m_R, m_L+1, A_R) > \frac{-R_S(m+\alpha_0+\beta_0)}{m_L+\beta_0}$ for $m_R > m_L$.
\end{property}
{\it Proof.} 
 Let $d(m_R, m_L) = v^*(m_R,m_L) - r(m_R,m_L,A_R)$.  From $\pi^*(m_R,m_L) = A_S$, we have $d(m_R,m_L) > 0$:
{\small
   \begin{eqnarray*}
d(m_R, m_L) & = & R_S + \sum_{m_R', m_L'}\Pr{m_R', m_L'|m_R,m_L, A_S}v^*(m_R',m_L') - r(m_R,m_L,A_R) \\
 &=&  R_S + \frac{m_R+\alpha_0}{m+\alpha_0+\beta_0}d(m_R+1, m_L) +  \frac{m_L + \beta_0}{m+\alpha_0+\beta_0}d(m_R, m_L+1) > 0 
   \end{eqnarray*}
}
Since $\pi^*(m_R + 1, m_L) = A_R$, $d(m_R+ 1, m_L) = 0$, and hence $d(m_R, m_L+1) > \frac{-R_S(m+\alpha_0+\beta_0)}{m_L+\beta_0}>0$. $\Box$
 \begin{property}
 \label{thm:AS2}
    If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, we have $\pi^*(m_R-1, m_L) = A_S$ for $m_R > m_L + 1$. 
 \end{property}
{\it Proof.} From property~\ref{thm:AS} we have $d(m_R-1, m_L+1) > -2R_S\frac{m_R + \alpha_0}{m_L+\beta_0}$. It follows that $d(m_R-1, m_L) > R_S + \frac{m_L+\beta_0}{m - 1 +\alpha_0+\beta_0} d(m_R-1,m_L+1) > -R_S\frac{m_R - m_L + 1}{m-1 + \alpha_0 + \beta_0} > 0$. $\Box$

 \begin{property}
 \label{thm:AS3}
If $\pi^*(m_R,m_L) = A_S$, then $\pi^*(m_R - 1, m_L + 1) = A_S$ for $m_R > m_L$.
 \end{property}
 \begin{property}
 \label{thm:AS4}
If $\pi^*(m_R,m_L) = A_R$, then $\pi^*(m_R + 1, m_L - 1) = A_R$ for $m_R > m_L$.
 \end{property}
The above two properties can be shown in a similar manner as property~\ref{thm:AS2}. 
\begin{theorem}
  The decision threshold for $A_R$ is a decreasing function of $m$. 
\end{theorem}
{\it Proof.}   Let $\phi^R(m)$ be the boundary between $\Pi^R(m)$ and $\Pi^S(m)$, the decision threshold for choosing $A_R$, and $\phi^L(m)$ be the decision threshold for choosing $A_L$. If $\pi^*(m_R,m_L) = A_S$ and $\pi^*(m_R + 1, m_L) = A_R$, then from properties~\ref{thm:AS} to \ref{thm:AS4}  we have $\pi^*(m_R, m_L+1) = A_S$ and $\pi^*(m_R-1, m_L) = A_S$. The decision boundary $\phi^R$ for $A_R$ at $m-1$ and $m+1$ is then  $\phi^R(m-1) \ge \frac{m_R}{m-1}$ and  $\phi^R(m+1) = \frac{m_R+1}{m+1}$, respectively. Thus, we have $\phi^R(m+1) < \phi^R(m-1)$ for $m > 1$. $\Box$

Since $r(m_R, m_L, A_R) = r(m_L, m_R, A_L)$, we have $v^*(m_R,m_L) = v^*(m_L, m_R)$. Similar properties for $A_L$ hold for $m_L > m_R$. Moreover, the decision threshold for $A_L$ is an increasing function of $m$, and $\phi^L(m) = 1 - \phi^R(m)$.


\section{Model Predictions: Psychometric Function and Reaction Time}
\subsection{Reaction Time Experiments}
We now construct a decision making model under the learned policy
$\pi^*$ for the reaction time version of the motion discrimination
task~\cite{Roitman02} (rather than the fixed duration version). As
illustrated in Figure~\ref{fig:decisionMaking}, the agent maintains a
running average $\muhat_t = \frac{t-1}{t}\muhat_{t-1} +
\frac{1}{t}o_t$ and selects an action based on the optimal policy
$\pi^*$.  Upon the arrival of a new observation $o_t$, the agent makes
a rightward or leftward decision and terminates the trial once
$\muhat_t > \phi^R_t$ or $\muhat_t < \phi^L_t$ where $\phi^R_t$ and
$\phi^L_t$ are the decision thresholds as defined above. When
$\muhat_t \in \Pi^S_t$, the agent chooses the sampling action and gets
a new observation $o_{t+1}$. The performance on the task using the optimal
policy $\pi^*$ can be measured in terms of both the accuracy of
direction discrimination (the so-called psychometric function in the
literature), and the reaction time required to reach a decision. In
this section, we derive the expected accuracy and speed of decisions
as a function of stimulus coherence $c$, and compare them to the
experimental psychometric and chronometric functions of a monkey
performing the same task~\cite{Roitman02}.

\begin{figure}[h!]
\subfigure[]{
\includegraphics[scale=0.22]{decisionMaking.png}\label{fig:decisionMaking}
}
\subfigure[]{
  \includegraphics[scale=0.08]{PCRT.jpg}
\label{fig:PCRT}
}
 \caption{(a) Model of the decision process under the optimal policy $\pi^*$. The input to the model is a motion sequence of random dots $o_{1:t}$. 
   (b) Expected psychometric and chronometric functions. Blue solid curve and red dotted curve represent model predictions $RT_R(c)$ and $RT_L(c)$ for $R_P = 100$. Green dashed line represents $RT_R(c)$ for $R_P = 50$. Black and red dots with error bars represent monkey response times for correct and incorrect trials. Data from~\cite{Roitman02}.}
\label{fig:performance}
\end{figure}

The sequence of random variables $\{\muhat_1,\muhat_2,\ldots,
\muhat_t\}$ forms a Markov chain with transition probabilities
$\Pr{\muhat_t = \frac{t-1}{t}\muhat_{t-1} + \frac{1}{t} |
  \muhat_{t-1}} = \mu = \frac{c+1}{2}$ and $\Pr{\muhat_t =
  \frac{t-1}{t}\muhat_{t-1} | \muhat_{t-1}} = 1 - \mu$. Let
$\Psi(\muhat_t,t|c)$ be the joint probability that the agent keeps
selecting $A_S$ between time $1$ and time $t$: $\Psi(\muhat_t,t|c) =
\Pr{ \muhat_1\in \Pi^S_1, \muhat_2\in \Pi^S_2,\ldots, \muhat_t \in
  \Pi^S_t}$. At $t=1$, the agent will select $A_S$ regardless of
$\muhat_1$ under $\pi^*$, making $\Psi(\muhat_1, 1|c) =
\Pr{\muhat_1|c}$. At $t > 1$, $\Psi(\muhat_t,t|c)$ can be updated
recursively:
\begin{eqnarray}
  \Psi(\muhat_t, t|c) = \sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c)
\end{eqnarray}

Let $\Pr{t,R|c}$ and $\Pr{t,L|c}$ be the joint probability mass
functions that the agent makes a right or left choice at time $t$,
respectively. These correspond to the probability that the point
estimator $\muhat(t)$ crosses the boundary of $\Pi^R$ or $\Pi^L$
before hitting the opposite boundary at time $t$:
\begin{eqnarray}
  \label{eq:RT_PDF}
  \Pr{t,R|c} &=& \Pr{\muhat_t \in \Pi^R_t , \muhat_{t-1} \in \Pi^S_{t-1}, \ldots, \muhat_{1} \in \Pi^S_{1}|c} \nonumber\\
&=& \sum_{\muhat_t \in \Pi^R_t}\sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c) \\
  \Pr{t,L|c} &=&  \sum_{\muhat_t \in \Pi^L_t}\sum_{\muhat_{t-1} \in \Pi^S_{t-1}} \Pr{\muhat_t | \muhat_{t-1}} \Psi(\muhat_{t-1}, t-1|c) 
\end{eqnarray}
The probabilities of making rightward or leftward eye movement are the
marginal probabilities summing over all possible crossing times:
$\Pr{R|c} = \sum_{t=1}^{\infty} \Pr{t,R|c}$ and $\Pr{L|c} =
\sum_{t=1}^{\infty} \Pr{t,L|c}$. When the underlying motion direction
is rightward, i.e., $c>0$, $\Pr{R|c}$ represents the accuracy of
motion discrimination.  The mean reaction times for correct and error
choices are the expected crossing times under the conditional
probability that the agent decides at time $t$ with decision
$A_R$ or $A_L$, respectively:
\begin{eqnarray}
  \label{eq:meanRT}
  RT_R(c) &=& \sum_{t=1}^{\infty} t \frac{\Pr{t,R|c}}{\Pr{R|c}}\\
  RT_L(c) &=& \sum_{t=1}^{\infty} t \frac{\Pr{t,L|c}}{\Pr{L|c}}
\end{eqnarray}

The left panel of figure~\ref{fig:PCRT} shows performance accuracy as
a function of motion strength $c$ for the model (solid curve) and a
monkey (black dots). The model parameters are the same as those in
figure~\ref{fig:learnedValueAndPolicy}. The right panel of
figure~\ref{fig:PCRT} shows the mean reaction time $RT_R(c)$ for
correct choices as a function of coherence $c$ for the model (solid
curve) and the monkey (black dots). Note that $RT_R(c)$ represents the
expected number of observations for making a rightward eye movement
$A_R$. In order to make a direct comparison to the monkey data
$RT^*_R(c)$, which is in units of time, a linear regression was used
to determine the duration $a$ of a single observation and the onset
of decision time $b$: $RT^*_R(c) = a \cdot RT_R(c) + b$ for $c = 0.032,
0.064, 0.128, 0.256$ and $0.512$. 

The dotted line in figure~\ref{fig:PCRT} denotes a longer reaction
time for error choices, which is also generally observed in the monkey
data.  Figure~\ref{fig:PCRT} additionally shows that a decrease in
$R_P$ would encourage faster response time and lower accuracy (dashed
curve).  Our model thus provides a quantitative
framework for predicting the effects of reward size $R_P$ on the
accuracy and speed of decision making. As figure~\ref{fig:PCRT}
depicts, the model prediction achieves a close fit to the monkey
data. Note that we did not attempt to quantitatively fit a particular
monkey's data; no fitting techniques were employed beyond a linear search
over $R_P \in \{10, 20, \ldots, 200\}$ (step size $10$) to determine the model parameters.

\subsection{Effects of Varying Prior Probability}
\begin{figure}[h!]
  \centering
\subfigure[]{
  \includegraphics[scale=0.06]{prior_wrong.jpg}
}
\subfigure[]{
  \includegraphics[scale=0.06]{prior.jpg}
}
\subfigure[]{
  \includegraphics[scale=0.37]{prior_exp.png}
}
 \caption{Psychometric and chronometric functions under biased (dotted) and neutral (solid line) prior knowledge. (a) Model prediction using standard Bayesian combination. (b) Prediction using the model in equation~\ref{eq:priorUpdate}. (c) Monkey data from~\cite{Hanks11}.}
 \label{fig:biased}
\end{figure}

Decisions are often based on a combination of sensory evidence and
prior knowledge about the true state. The standard Bayesian approach
is to initialize the belief over $\mu$ with the prior belief.
However, the prior probability distribution $\textrm{Pr}_0[\mu]$
learned from previous trials may differ from the distribution in
the current trial. Unless one has a large number of observed samples, a
misleading prior belief has negative effects on inference over
$\mu$~\cite{Gallistel2009}. We therefore propose that the animal
utilizes a mixture model where, with probability $1 - \gamma$, $\mu$ is
drawn from the posterior distribution given observations $o_{1:t}$
defined in equation~\ref{eq:posterior}, and with probability $\gamma$,
$\mu$ is drawn from the learned ``prior'' distribution $\textrm{Pr}_0(\mu)$, i.e.,
\begin{eqnarray}
  \label{eq:priorUpdate}
b_t'(\mu) = \Pr{\mu | o_{1:t}, P_0} = (1 - \gamma) \Pr{\mu|o_{1:t}} + \gamma \textrm{Pr}_0[\mu]
\end{eqnarray}
where the weight $\gamma$ can be regarded as the relative reliability of the
learned prior information.  Similar models for combining prior probabilities
have been proposed by~\cite{Yu09,Fard11}. 
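In terms of the belief mean, equation~\ref{eq:priorUpdate} yields a fixed-weight mixture of the evidence-based estimate $m_R(t)/t$ and the prior mean, as sketched below; the prior mean of $0.6$ and $\gamma = 0.25$ are illustrative values.

```python
# Mixture belief mean: combine the evidence-based posterior mean m_R(t)/t
# (uniform prior, n = 1) with the learned prior mean, weighted by gamma.
# Because the newest observation o_t enters with weight (1 - gamma)/t,
# the prior's relative influence on the decision variable grows with t.
def mixture_mean(observations, gamma=0.25, prior_mean=0.6):
    t = len(observations)
    evidence_mean = sum(observations) / t     # m_R(t) / t
    return (1 - gamma) * evidence_mean + gamma * prior_mean

# Three rightward dots in a row, with a prior biased toward rightward motion:
m = mixture_mean([1, 1, 1])                   # 0.75 * 1.0 + 0.25 * 0.6
```
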

Figure~\ref{fig:biased}(b) shows the model predictions when the prior
probability of rightward motion, i.e.,
$\int_{0.5}^1\textrm{Pr}_0[\mu] d\mu$, is $0.8$ (dashed line) and
$0.5$ (solid line), respectively. Model predictions are obtained by
applying value iteration to get the optimal policy on the corresponding POMDP model with belief
update~\ref{eq:priorUpdate}, $\gamma = 0.25$, and other model
parameters the same as those in
Figure~\ref{fig:learnedValueAndPolicy}. Unlike the standard Bayesian
model in Figure~\ref{fig:biased}(a), the model prediction using
equation~\ref{eq:priorUpdate} on the reaction times with a biased
prior exhibits an asymmetric distribution around zero percent
coherence: longer response times when $c < 0$ and shorter response
times when $c > 0$.  This characteristic feature has also been
recently reported in monkey experiments (Figure~\ref{fig:biased}(c)).   
The experimental data in~\cite{Hanks11} is not yet publicly
available and we therefore focus here on a qualitative match. In addition, from equation~\ref{eq:priorUpdate} we have $\muhat_t = \E[\mu|b_t'(\mu)] = (1-\gamma)\frac{m_R(t-1)}{t} + \frac{1-\gamma}{t}o_t + \gamma \E_{\textrm{Pr}_0}[\mu]$. Note that the
relative weight of the prior $ \E_{\textrm{Pr}_0}[\mu]$ compared to
the new evidence $o_t$ is an increasing function of  time, causing
the prior to exert more influence on the decision as time progresses.
This provides a normative explanation for the dynamic bias signal
assumption in~\cite{Hanks11} in which prior probability plays an
increasingly important role in the decision process over time.
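The time course of this weighting can be made concrete with a one-line check: in the posterior-mean expression above, the prior mean carries a constant coefficient $\gamma$ while the newest observation $o_t$ carries coefficient $(1-\gamma)/t$, so their ratio grows linearly with the number of samples. A quick numerical sketch (the function name is illustrative):

```python
def prior_to_evidence_ratio(t, gamma=0.25):
    """Coefficient of the prior mean (gamma) divided by the coefficient
    of the newest observation o_t ((1 - gamma)/t) in the posterior-mean
    update; the ratio gamma * t / (1 - gamma) grows linearly in t."""
    return gamma / ((1.0 - gamma) / t)
```

With $\gamma = 0.25$ the prior and the newest observation are weighted equally at $t = 3$; by $t = 30$ the prior's relative weight is ten times larger.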


\section{Discussion}
Considerable progress has been made in understanding the mechanisms of
decision making using the random dots motion discrimination task. The
drift diffusion model~\cite{Palmer05, Bogacz06} has successfully
provided \textit{descriptive} accounts of both decision accuracies and mean
reaction times for correct choices. This paper provides a \textit{normative}
account of the monkey's behavior, illustrating how the monkey's
choices can be interpreted as being optimal under the framework of
partially observable Markov decision processes (POMDPs). Our model
predicts psychometric and chronometric functions that are
quantitatively close to those observed in monkeys.  We showed through
analytical derivations and numerical results that the optimal
threshold for selecting overt actions is a declining function of time.
Such a collapsing decision bound has previously been obtained for
decision making under a deadline~\cite{FrazierYu08,Rao10}, and has been
introduced as an ad-hoc assumption in the framework of drift diffusion
models~\cite{Ditterich06, Latham07, Churchland08} to explain the
finite response time at zero percent coherence and the longer reaction
times on error trials. Our results demonstrate that a collapsing
bound emerges naturally as a consequence of reward maximization.

The model makes several interesting empirical predictions.  Besides
predicting the effects of reward size on the agent's performance, the
model also demonstrates that the optimal decision threshold depends
directly on the number of sampled observations rather than just the
elapsed time. One could test this prediction by varying the duration
of each sample, e.g., by making the frame rate of the random dots
stimulus time-variant.

Instead of traditional dynamic programming techniques, the optimal
policy $\pi^*$ and value function $v^*$ can be learned via Monte Carlo
approximation-based methods such as temporal difference (TD)
learning~\cite{Sutton98}.  There is much evidence suggesting that the
firing rate of midbrain dopaminergic neurons might represent the
reward prediction error in TD learning.  Thus, the learning of value
in the current model could potentially be implemented in a manner
similar to previous TD learning models of the basal
ganglia~\cite{Schult97, Dayan08, Rao10, Bogacz11}.  The neural
mechanism for decision making within a single trial could be similar
to that in drift diffusion models: sensory neurons count the number of
rightward and leftward samples received and employ divisive
normalization to maintain the point estimate $\muhat_t =
\frac{m_R}{m_R+m_L}$. The response of LIP neurons would then represent
the difference between $\muhat$ and the optimal decision threshold
$\phi^R(t)$ learned using TD learning. In this model, a rightward eye
movement is initiated only when the LIP response $\muhat_t -
\phi^R(t)$ reaches a fixed bound (in this case, $0$). Such a model
provides a normative explanation for the ``urgency'' signal
\cite{Churchland08} or the ramping behavior seen in LIP neurons even
for zero coherence.
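The TD learning proposal above can be illustrated on a toy problem. The sketch below runs tabular TD(0) on the classic five-state random walk, a stand-in for learning values over discretized belief states rather than the paper's full POMDP; the TD error in the update plays the role of the reward-prediction-error signal attributed to dopaminergic neurons:

```python
import random

def td0_chain(n_states=5, episodes=2000, alpha=0.1, seed=0):
    """Tabular TD(0) value estimation on a random-walk chain: start in
    the middle, step left/right uniformly, reward 1 on reaching the
    right end, 0 on the left. True values are v(i) = i / (n_states + 1)."""
    random.seed(seed)
    v = [0.0] * (n_states + 2)            # indices 0 and n_states + 1 are terminal
    for _ in range(episodes):
        s = (n_states + 1) // 2
        while 0 < s < n_states + 1:
            s_next = s + random.choice((-1, 1))
            r = 1.0 if s_next == n_states + 1 else 0.0
            target = r if s_next in (0, n_states + 1) else v[s_next]
            v[s] += alpha * (target - v[s])   # TD (reward-prediction) error
            s = s_next
    return v
```

With the default settings the learned values approach $1/6, 2/6, \ldots, 5/6$ for states $1$ through $5$, matching the analytic solution for this chain.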


\bibliographystyle{unsrt}
\bibliography{pomdp}

\end{document}
