\documentclass[letterpaper]{article}
\usepackage{proceed2e}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage{epsfig,subfigure}
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\usepackage{array}

% Import an algorithm formatting package
\usepackage[vlined,algoruled,titlenumbered,noend]{algorithm2e}

% Define argmin, argmax
\def\argmax{\operatornamewithlimits{arg\max}}
\def\argmin{\operatornamewithlimits{arg\min}}
\def\supmax{\operatornamewithlimits{arg\sup}}

% Strikeout
%\usepackage{ulem}
%\normalem

% Define a fourth level subheading (Scott)
\newcommand{\subfour}{\vspace*{3mm}\hspace{-2mm}}

% Define a command for extended commenting (Scott)
\long\def\COMMENT#1\ENDCOMMENT{\message{(Commented text...)}\par}

% Define common macros
\input{Macros}

\begin{document}
% The file aaai.sty is the style file for AAAI Press 
% proceedings, working notes, and technical reports.
%
\title{Symbolic Dynamic Programming for Discrete and Continuous State MDPs}

\author{Anonymous}
%\author{Scott Sanner\\
%NICTA \& the ANU\\
%Canberra, Australia\\
%{\tt ssanner@nicta.com.au}
%\And
%Karina Valdivia Delgado\\
%University of Sao Paulo\\
%Sao Paulo, Brazil\\
%{\tt kvd@ime.usp.br}
%\And
%Leliane Nunes de Barros\\
%University of Sao Paulo\\
%Sao Paulo, Brazil\\
%{\tt leliane@ime.usp.br}
%}
\maketitle

\begin{abstract}
Many real-world decision-theoretic planning problems can be naturally
modeled with discrete and continuous state Markov decision processes
(DC-MDPs).  While previous work has addressed automated
decision-theoretic planning for DC-MDPs, \emph{optimal} solutions have
only been defined so far for limited settings, e.g., DC-MDPs having
\emph{hyper-rectangular piecewise linear value functions}.  In this
work, we extend symbolic dynamic programming (SDP) techniques to
provide optimal solutions for a vastly expanded class of DC-MDPs.
To address the inherent combinatorial aspects of SDP, we introduce the
XADD --- a continuous variable extension of the algebraic decision
diagram (ADD) --- that maintains compact representations
of the exact value function.  Empirically, we demonstrate an
implementation of SDP with XADDs on various DC-MDPs, showing the
\emph{first optimal automated solutions} to DC-MDPs with
\emph{linear and nonlinear piecewise partitioned value functions} and
showing the advantages of constraint-based pruning for XADDs.  
%
%decision-list and tree-structured
%no work appears to have
%addressed planning with \emph{continuous action} spaces without
%requiring some form of approximation such as discretization or
%sampling.  In this work, we propose a symbolic dynamic programming
%algorithm for the solution of CSA-MDPs motivated by the connection
%between first-order MDPs and CSA-MDPs. For finite-horizon planning
%with linear action dynamics, this approach admits a closed-form
%analytical solution.  The result is an exact algorithm for
%finite-horizon planning in factored or first-order CSA-MDPs that
%empirically outperforms state-of-the-art approaches on a Mars rover
%problem.
%
% ARGUMENT
% - Many works have attempted, most are either approximate
%   or make an assumption of hyper-rectangular piecewise linearity
%   (Littman, Feng, Mausam), single-dimension piece polynomial (Benazera).
%   Other works solve TMDPs using phase-type distributions, Tambe
%   claims a generalization to general continuous state MDPs but does
%   not provide methods for approximating arbitrary (nonlinear) transition
%   functions with phase-type
%   distributions, nor an algorithm or results; they appeal to Feng and
%   Littman as methods to produce this generalization thereby indicating
%   they would make a similar hyper-rectangular restriction that is
%   not made in this work.
%
% FUTURE WORK
% - Combined with forward search techniques like HAO* --- just replace
%   Feng representation and backup with this one
% - Combined with lazy approximation approaches
% - Combine with APRICODD style approaches (need to bound leaves)
% - General stochastic distributions -- requires constrained integration
% - Continuous actions
%
% DEVELOPMENT
% - Define DC-MDP
%   (case representation, dynamics, reward)
%
% RUNNING EXAMPLE
% - Knapsack: give solution first (explain basic XADD)
%
% CONTRIBUTIONS
% - Extend DC-MDPs to causal, stochastic difference equations (restricted)
% - Extend SDP to continuous domains... not piecewise constant,
%   how to do maximization with general functions
% - Introduce novel XADD... decision nodes either booleans or 
%   equality/disequality/inequality
%   * ordering complications: max introduces new nodes, substitutions
%     introduce new nodes, need the Apply reordering trick
% 
% RELATED WORK
% Continuous MDPs; First-order MDPs; Control Theory -- Kalman filter/LQR
% and nonlinear extensions (UKF, EKF)... DC-MDPs with continuous actions,
% extensions representable (just extra variables), only exact solutions
% in restricted cases (Kalman)
%
% FODDs, first-order ADDs
\end{abstract}

% \cite{}, \citeauthor{}~\shortcite{} (to include author names in text)

\section{Introduction}

Many real-world stochastic planning problems involving resources,
time, or spatial configurations naturally use 
continuous variables in their state representation.  
For example, in the \MarsRover\ 
problem~\cite{bresina02}, a rover must manage bounded continuous
resources of battery power and daylight time as it plans scientific
discovery tasks for a set of landmarks on a given day.  

While problems such as the \MarsRover\ are naturally modeled by
discrete and continuous state Markov decision processes (DC-MDPs),
little progress seems to have been made in recent years in developing
\emph{exact} solutions for DC-MDPs with multiple
continuous state variables beyond the subset of DC-MDPs 
which have an optimal 
\emph{hyper-rectangular piecewise linear value
function}~\cite{feng04,li05}.  

% NOTES
% claims about solutions to knapsack?
% case representation up front
% five years
%
%As discussed later in Related Work (Section~\ref{sec:rel_work}), no
%existing algorithm appears to \emph{exactly} solve general DC-MDPs when the
%optimal value function is piecewise and the partitions have general
%linear or nonlinear boundaries defined w.r.t.\ two or more continuous
%variables.  

Yet even simple DC-MDPs may require optimal value
functions that are piecewise functions with non-rectangular boundaries; 
as an illustration, we consider \Knapsack:

\begin{example}[\Knapsack]
\label{ex:knapsack}
We have three continuous state variables: $k \in [0,100]$ indicating
knapsack weight, and two sources of knapsack contents: $x_i \in
[0,100]$ for $i \in \{ 1,2 \}$.  We have two actions $\mathit{move}_i$
for $i \in \{ 1,2 \}$ that can move {\bf all} of a resource from $x_i$ to
the knapsack {\bf if} the knapsack weight remains below
its capacity of $100$.  We get an immediate reward for any weight added 
to the knapsack.

We can formalize the transition and reward for \Knapsack\ 
action $\mathit{move}_i$ $(i \in \{ 1,2 \})$
using difference equations, where $(k,x_1,x_2)$
and $(k',x_1',x_2')$ are respectively the pre- and post-action
states and $R$ is the immediate reward:

%; following is the state-update
%equation for action $\mathit{move}_i$ ($i \in \{ 1,2 \}$): 
{\footnotesize
\begin{tabular}{l l}
\hspace{-2mm} $k' = \begin{cases}
k + x_i \leq 100 : & k + x_i \\
k + x_i > 100 :    & k \\
\end{cases}$ & $R = \begin{cases}
k + x_i \leq 100 : & x_i \\
k + x_i > 100 :    & 0 \\
\end{cases}$\\
\hspace{-2mm} $x_i' = \begin{cases}
k + x_i \leq 100 : & 0 \\
k + x_i > 100 :    & x_i \\
\end{cases}$ & $x_j' = x_j, \; (j \neq i)$
\end{tabular}}
\end{example}
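For concreteness, the deterministic dynamics above can be simulated directly; the following sketch is our illustrative encoding (states as Python tuples, actions 0-indexed), not part of the formal model:

```python
def move(state, i):
    """Apply action move_i (i in {0, 1}, 0-indexed) to state (k, x1, x2);
    returns (next_state, reward).  All of resource x_i is moved into the
    knapsack iff the result stays within the capacity of 100."""
    k, x = state[0], [state[1], state[2]]
    if k + x[i] <= 100:        # the whole resource fits
        reward = x[i]
        k += x[i]
        x[i] = 0.0
    else:                      # does not fit: state (and reward) unchanged
        reward = 0.0
    return (k, x[0], x[1]), reward
```

Each branch of the conditional corresponds to one case partition of the difference equations in Example~\ref{ex:knapsack}.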

If our objective is to maximize the long-term \emph{value} $V$ (i.e.,
the sum of rewards received over an infinite horizon of actions), then
we can write the optimal value achievable from a given state in \Knapsack\ 
as a function of state variables:
\vspace{-10mm}

{\footnotesize
%\begin{tabular}{l}
%$
\begin{align}
V = \begin{cases}
x_1 + k > 100 \land x_2 + k > 100 : & 0 \\
x_1 + k > 100 \land x_2 + k \leq 100 : & x_2 \\
x_1 + k \leq 100 \land x_2 + k > 100 : & x_1 \\
x_1 + k \leq 100 \land x_2 + k \leq 100 \land x_1 + x_2 + k > 100 \land x_2 > x_1 : & x_2 \\
x_1 + k \leq 100 \land x_2 + k \leq 100 \land x_1 + x_2 + k > 100 \land x_2 \leq x_1 : & x_1 \\
x_1 + x_2 + k \leq 100: & \hspace{-7mm} x_1 + x_2 \\
\end{cases} \label{eq:vfun_knapsack}
%$
%\end{tabular}
\end{align}
}
One can verify that this encodes the following
rules (in order): (a) if neither item fits in the knapsack, 0 reward
is obtained; (b) if at least one item fits but both together do not,
the reward is the largest single item that fits; (c) if
both items fit together then reward $x_1 + x_2$ is obtained.  Here we note
that the value function is piecewise linear, but it contains decision
boundaries like $x_1 + x_2 + k \leq 100$ that are clearly non-rectangular;
rectangular boundaries are restricted to conjunctions of simple
inequalities of a continuous variable and a constant (e.g., $x_1 \leq
5 \land x_2 > 2 \land k \geq 0$). 
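To make these case semantics concrete, the value function in \eqref{eq:vfun_knapsack} can be evaluated by testing its conditions; the sketch below is our illustrative Python encoding, not the symbolic representation SDP will manipulate:

```python
def knapsack_value(k, x1, x2):
    """Optimal infinite-horizon value for the Knapsack example
    (capacity 100): take both items if they fit together, otherwise
    the largest single item that fits, otherwise nothing."""
    fits1, fits2 = x1 + k <= 100, x2 + k <= 100
    if fits1 and fits2 and x1 + x2 + k <= 100:
        return x1 + x2            # both items fit together
    if fits1 and fits2:
        return max(x1, x2)        # each fits alone; take the larger
    if fits1:
        return x1                 # only item 1 fits
    if fits2:
        return x2                 # only item 2 fits
    return 0.0                    # nothing fits
```

Note the non-rectangular boundary $x_1 + x_2 + k \leq 100$ appears as the first test.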

What is interesting to note is that although \Knapsack\ is very
simple, no previous algorithm in the DC-MDP literature has been proposed to
exactly solve it, due to the nature of its non-rectangular piecewise
optimal value function.  Of course our focus in this paper is not just
on \Knapsack\ --- researchers have spent decades finding improved
solutions to this particular combinatorial optimization problem ---
but rather on general stochastic sequential optimization in DC-MDPs 
that contain structure similar to \Knapsack, as well as 
highly nonlinear structure beyond \Knapsack.

Before attempting this, it is important to ask: if the solution to
\Knapsack\ is simple and intuitive, why is it beyond the reach of existing
exact DC-MDP solutions?  Upon closer examination of the general
DC-MDP machinery needed to solve this problem, it is not immediately
clear how to evaluate the Bellman backup operations typically
used to obtain exact solutions in DC-MDPs
\emph{when} the value and Q-functions may be arbitrary piecewise
functions; this would require the closed-form computation of integrals
of arbitrary piecewise functions and closed-form maximization over
these Q-functions.  For DC-MDPs with multiple continuous variables,
these questions have only been
answered for value functions that are rectangular piecewise
linear and transition functions that are mixtures of delta
functions~\cite{feng04,li05}; more general closed-form
solutions have not been readily apparent.

In this paper, we propose novel ideas to work around some of these
expressiveness limitations of previous approaches and 
significantly generalize the range of DC-MDPs that
can be solved exactly.  To achieve this more general solution, this
paper contributes a number of important advances:
\begin{itemize}
\item We propose to represent the transition function of 
a DC-MDP using conditional stochastic difference equations; in using 
this formalism, we observe that many aspects of the proposed symbolic 
DC-MDP solution become readily apparent.
\item The use of conditional stochastic difference equations
facilitates symbolic regression of the value function via
substitutions.  This is precisely the motivation behind symbolic
dynamic programming (SDP)~\cite{fomdp} used to solve MDPs with
transitions and reward functions defined in first-order logic, except
that in prior SDP work, only piecewise constant functions have been used;
in this work we introduce techniques for working with \emph{arbitrary} 
piecewise symbolic functions.
\item While the \emph{case} representation for the optimal \Knapsack\ 
solution shown in \eqref{eq:vfun_knapsack} is sufficient in theory to
represent the optimal value functions that our DC-MDP solution
produces, this representation is unreasonable to maintain in practice
because the number of case partitions typically grows exponentially with
each receding horizon control step.  For \emph{discrete} factored
MDPs, algebraic decision diagrams (ADDs)~\cite{bahar93add} have been
successfully used in exact algorithms like SPUDD~\cite{spudd} to
maintain compact value representations.  Motivated by this work we
introduce extended ADDs (XADDs) to compactly represent general
piecewise functions and show how to perform efficient operations on
them \emph{including} symbolic maximization.  We also borrow
techniques from~\cite{penberthy94} for constraint-based pruning of
XADDs that can be applied when XADDs meet certain expressiveness
restrictions.
\end{itemize}

Aided by these algorithmic and data structure advances, 
we empirically demonstrate that our SDP approach
with XADDs can exactly solve a variety of DC-MDPs with \emph{general
piecewise linear and nonlinear value functions} for which no previous
analytical solution has been proposed.

\section{Discrete and Continuous State MDPs}

\label{sec:dcmdps}

We first introduce discrete and continuous state Markov decision
processes (DC-MDPs) and then review their finite-horizon solution via
dynamic programming following~\cite{li05}.  

\subsection{Factored Representation}

In a DC-MDP, states will be represented by vectors of variables
$(\vec{b},\vec{x}) = ( b_1,\ldots,b_n,x_{1},\ldots,x_m )$.  We assume
that each state variable $b_i$ ($1 \leq i \leq n$) is
boolean s.t.\ $b_i \in \{ 0,1 \}$ and each $x_j$ ($1 \leq j \leq m$) is 
continuous s.t.\ $x_j \in [L_j,U_j]$ for $L_j,U_j \in
\mathbb{R}$, $L_j \leq U_j$.  We also assume a finite set of actions $A
= \{ a_1, \ldots, a_p \}$.

A DC-MDP is defined by the following: (1) a state transition model
$P(\vec{b}',\vec{x}'|\cdots,a)$, which specifies the probability of
the next state $(\vec{b}',\vec{x}')$ conditioned on a subset of the
previous and next state (defined below) and action $a$; (2) a reward
function $R(\vec{b},\vec{x},a)$, which specifies the immediate reward
obtained by taking action $a$ in state $(\vec{b},\vec{x})$; and (3) a
discount factor $\gamma, \; 0 \leq \gamma \leq 1$.\footnote{If time is
explicitly included as one of the continuous state variables, $\gamma
= 1$ is typically used, unless discounting by horizon (different from
the state variable time) is still intended.}  
A policy $\pi$
specifies the action $\pi(\vec{b},\vec{x})$ to take in each state
$(\vec{b},\vec{x})$.  Our goal is to find an optimal sequence of
horizon-dependent policies $\Pi^* = (\pi^{*,1},\ldots,\pi^{*,H})$
that maximizes the expected sum of discounted rewards over a horizon
$H \geq 0$:\footnote{$H=\infty$ is allowed if an optimal policy has a
finitely bounded value (guaranteed if $\gamma < 1$); for $H=\infty$, 
the optimal policy is independent of horizon, 
i.e., $\forall h \geq 0, \pi^{*,h} = \pi^{*,h+1}$.}
\begin{align}
V^{\Pi^*}(\vec{b}_0,\vec{x}_0) & = E_{\Pi^*} \left[ \sum_{h=0}^{H} \gamma^h \cdot r^h \Big| \vec{b}_0,\vec{x}_0 \right]. \label{eq:vfun_def}
\end{align}
Here $r^h$ is the reward obtained at horizon $h$ following $\Pi^*$ where 
we assume starting state $(\vec{b}_0,\vec{x}_0)$ at $h=0$.
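As a quick sanity check on \eqref{eq:vfun_def}, for a fixed realized reward sequence the discounted return is just a geometric-weighted sum; a two-line illustration (ours):

```python
def discounted_return(rewards, gamma):
    """Evaluate sum_{h=0}^{H} gamma^h * r^h for a realized sequence
    of rewards r^0, ..., r^H."""
    return sum(gamma ** h * r for h, r in enumerate(rewards))
```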
 
DC-MDPs as defined above are naturally factored~\cite{boutilier99dt}
in terms of state variables $(\vec{b},\vec{x})$; as such, transition
structure can be exploited in the form of a dynamic Bayes net
(DBN)~\cite{dbn} where the individual conditional probabilities
$P(b_i'|\cdots,a)$ and $P(x_j'|\cdots,a)$ condition on a subset of the
variables in the current and next state.  We disallow \emph{synchronic
arcs} (variables that condition on each other in the same time slice) 
within the binary $\vec{b}$ and continuous variables $\vec{x}$, 
but we allow synchronic arcs from $\vec{b}$ to $\vec{x}$ (note that
these conditions enforce directed graph requirements for the dynamic
Bayes net transition distribution).
%from variables in $\vec{b}$ to each other and to $\vec{x}$.
%variables to condition on binary
%them between  to enforce directed graph
%properties for \emph{synchronic arcs} among binary variables 
%, we assume a total ordering over
%binary and continuous variables and let $\vec{b}_{<i}$
%($\vec{x}_{<j}$) represent all variables lower than $b_i$ ($x_j$) in
%the ordering; furthermore we asssume and that all 
%$\vec{b}$ come before $\vec{x}$.  
Thus, the joint transition model can be specified as
\begin{align}
P(\vec{b}',&\vec{x}'|\cdots,a) = \label{eq:dbn} \\
& \prod_{i=1}^n P(b_i'|\vec{b},\vec{x},a) \prod_{j=1}^m P(x_j'|\vec{b},\vec{b}',\vec{x},a). \nonumber 
\end{align}

As for standard finite discrete factored MDPs, the conditional
probabilities $P(b_i'|\vec{b},\vec{x},a)$ for \emph{binary} variables
$b_i$ ($1 \leq i \leq n$) can be represented by conditional
probability tables (CPTs).  For the \emph{continuous} variables $x_j$
($1 \leq j \leq m$), we represent the continuous probability functions
(CPFs) $P(x_j'|\vec{b},\vec{b'},\vec{x},a)$ with \emph{conditional stochastic
difference equations} (CSDEs).  For the solution provided here, we
only require two properties of these CSDEs: (1) they are
\emph{Markov}, meaning that they can only condition on the previous
state, and (2) they are \emph{causal} meaning that the next state must
be uniquely determined from the previous state (i.e., $x_1' = x_1 +
x_2^2$ is causal whereas $x_1'^2 = x_1^2$ is non-causal because $x_1'
= \pm x_1$).  Otherwise we allow for arbitrary functions in these
causal Markov conditional difference equations as in the following
example:
\vspace{-3mm}

{\footnotesize
\begin{align}
P(x_1' | \vec{b},\vec{b}',\vec{x},a) = \delta\left[ x_1' = 
\begin{cases}
b_1' \land x_2^2 \leq 1 : & \exp(x_1^2 - x_2^2) \\
\neg b_1' \lor  x_2^2 > 1 : & x_1 + x_2 \\
\end{cases}
\right] \label{eq:ex_csde}
\end{align}}
Here %the next-state of $x_1'$ is independent of the action $a$;
the use of the Dirac $\delta[\cdot]$ function ensures that this is a
proper probability distribution function that integrates to 1 over $x_1'$
in this case.  In this work, we require all CSDEs in the transition
function for each variable $x_j$ to use the $\delta[\cdot]$ form shown in this

Clearly, CSDEs in the form of \eqref{eq:ex_csde} are
\emph{conditional difference equations}; they are furthermore \emph{stochastic}
because they can condition on boolean random variables in the same time slice
that are stochastically sampled, e.g., $b_1'$ in
\eqref{eq:ex_csde}.  Of course, these CSDEs are restricted in that
they cannot represent general stochastic noise (e.g., Gaussian noise),
but we note that this representation effectively allows modeling of
continuous variable transitions as a mixture of $\delta$ functions,
which has been used heavily in previous exact DC-MDP
solutions~\cite{feng04,li05,hao09}.  Furthermore, we note that our
representation is more general than~\cite{feng04,li05,hao09} in that
we do not restrict the difference equation to be linear, but rather
allow it to specify \emph{arbitrary} functions (e.g., nonlinear) as
demonstrated in~\eqref{eq:ex_csde}.
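A CSDE such as \eqref{eq:ex_csde} can be stored as an ordered list of mutually exclusive (guard, update) pairs; the sketch below is our ad hoc encoding (guards and updates as Python callables over a state dictionary, with \texttt{b1\_next} standing for the sampled $b_1'$), returning the unique next value that causality guarantees:

```python
import math

# CSDE for x1' from the example above: mutually exclusive (guard, update)
# pairs; guards may test the sampled next-state boolean b1'.
CSDE_X1 = [
    (lambda s: s["b1_next"] and s["x2"] ** 2 <= 1,
     lambda s: math.exp(s["x1"] ** 2 - s["x2"] ** 2)),
    (lambda s: not s["b1_next"] or s["x2"] ** 2 > 1,
     lambda s: s["x1"] + s["x2"]),
]

def apply_csde(csde, state):
    """Return the next value given by the first (unique) true guard."""
    for guard, update in csde:
        if guard(state):
            return update(state)
    raise ValueError("partial function: no guard applies")
```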

We allow 
the reward function $R(\vec{b},\vec{x},a)$
to be \emph{any} arbitrary function of the current state
and action, for example:
\begin{align}
R(\vec{b},\vec{x},a) = \begin{cases}
x_1^2 + x_2^2 \leq 1 : & 1 - x_1^2 - x_2^2  \\
x_1^2 + x_2^2 > 1 : & 0 \\
\end{cases} \label{eq:simple_reward}
\end{align}
or even 
\begin{align}
R(\vec{b},\vec{x},a) & = 10 x_3 x_4 \exp(x_1^2 + \log(x_2)) \label{eq:expr_reward}
\end{align}
While our DC-MDP examples throughout the paper will demonstrate the
full expressiveness of our symbolic dynamic programming approach,
we note that there are computational advantages to be had when
the reward and transition case conditions and 
functions can be restricted, e.g., to
polynomials.  We will return to this issue later.

\subsection{Solution Methods}

\label{sec:soln}

Now we provide a continuous state and action generalization of {\it
value iteration}~\cite{bellman}, which is a dynamic programming
algorithm for constructing optimal policies.  It proceeds by
constructing a series of $h$-stage-to-go value functions
$V^h(\vec{b},\vec{x})$.  Setting $V^0(\vec{b},\vec{x}) = R(\vec{b},\vec{x})$, 
we define the quality of taking action $a$ in state
$(\vec{b},\vec{x})$ and acting so as to obtain $V^{h}(\vec{b},\vec{x})$ 
thereafter as the following:
\vspace{-3mm}

{\footnotesize
\begin{align}
& Q^{h+1}(\vec{b},\vec{x},a) = R(\vec{b},\vec{x},a) + \gamma \cdot \label{eq:qfun} \\ 
& \sum_{\vec{b}'} \int_{\vec{x}'} \left( \prod_{i=1}^n P(b_i'|\vec{b},\vec{x},a) \prod_{j=1}^m P(x_j'|\vec{b},\vec{b}',\vec{x},a) \right) V^h(\vec{b}',\vec{x}') d\vec{x}' \nonumber
\end{align}}

Given $Q^h(\vec{b},\vec{x},a)$ for each $a \in A$, we can proceed
to define the $h+1$-stage-to-go value function as follows:
\begin{align}
V^{h+1}(\vec{b},\vec{x}) & = \max_{a \in A} \left\{ Q^{h+1}(\vec{b},\vec{x},a) \right\} \label{eq:vfun}
\end{align}

If the horizon $H$ is finite, then the optimal value function is
obtained by computing $V^H(\vec{b},\vec{x})$ and the optimal
horizon-dependent policy $\pi^{*,h}$ at each stage $h$ can be easily
determined via 
$\pi^{*,h}(\vec{b},\vec{x}) = \argmax_a Q^h(\vec{b},\vec{x},a)$.  
If the horizon 
$H = \infty$ and the optimal policy has finitely bounded value, 
then value iteration can terminate at horizon $h+1$ once 
$V^{h+1} = V^{h}$; then 
$\pi^*(\vec{b},\vec{x}) = \argmax_a Q^{h+1}(\vec{b},\vec{x},a)$.

Of course this is simply the \emph{mathematical} definition.  In the
discrete-only case, we can always compute this in tabular form;
however, how to compute this for DC-MDPs with reward and transition
function as previously defined is the objective of the symbolic
dynamic programming algorithm that we define next.
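For contrast with the symbolic solution developed next, the discrete-only tabular computation of \eqref{eq:qfun} and \eqref{eq:vfun} looks as follows; this is our minimal sketch (value initialized to zero rather than to $R$, transitions and rewards given as nested dictionaries):

```python
def value_iteration(S, A, P, R, gamma, H):
    """Tabular value iteration: V^{h+1}(s) = max_a [ R(s,a) +
    gamma * sum_{s'} P(s'|s,a) * V^h(s') ].  P[s][a] maps s' -> prob."""
    V = {s: 0.0 for s in S}
    for _ in range(H):
        V = {s: max(R[s][a] +
                    gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                    for a in A)
             for s in S}
    return V
```

With continuous $\vec{x}$, no such finite enumeration of states exists, which is precisely what motivates the symbolic approach.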

\section{Symbolic Dynamic Programming}

As its name suggests, symbolic dynamic programming (SDP)~\cite{fomdp}
is simply the process of performing dynamic programming (in this case
value iteration) via symbolic manipulation.  While SDP as defined
in~\cite{fomdp} was previously only used with piecewise
constant functions, we now generalize the representation to work with
general piecewise functions needed for DC-MDPs in this paper.  

Before we define our solution, however, we must formally define our
case representation and symbolic case operators.

\subsection{Case Representation and Operators}

Throughout this paper, we will assume that all symbolic functions
can be represented in \emph{case} form as follows:
{%\footnotesize 
\begin{align*}
f = 
\begin{cases}
  \phi_1 & f_1 \\ 
  : & : \\ 
  \phi_k & f_k \\ 
\end{cases}
\end{align*}
}
Here the $\phi_i$ are logical formulae defined over the state
$(\vec{b},\vec{x})$ that can include arbitrary logical ($\land,\lor,\neg$)
combinations of (a) boolean variables in $\vec{b}$ and (b) 
inequalities ($\geq,>,\leq,<$), equalities ($=$), or disequalities ($\neq$)
where the left and right operands can be \emph{any} function of one or more 
variables in $\vec{x}$.  
Each $\phi_i$ will be disjoint from the other $\phi_j$ ($j \neq i$); 
however the $\phi_i$ may not exhaustively cover the state space, hence
$f$ may only be a \emph{partial function} and may be undefined for some
state assignments.
%\footnote{In the context of SDP, states whose value
%at horizon $h$ is undefined correspond to states that cannot reach any
%defined state of the reward in horizon $h$.}
The $f_i$ can be \emph{any} functions of the state
variables in $\vec{x}$.  

As concrete examples, consider the transition representation for
\Knapsack\ in Ex.~\ref{ex:knapsack}, the optimal value function for
\Knapsack\ from~\eqref{eq:vfun_knapsack}, or any of
\eqref{eq:ex_csde}, \eqref{eq:simple_reward}, or \eqref{eq:expr_reward}.

\emph{Unary operations} such as scalar multiplication $c\cdot f$ (for
some constant $c \in \mathbb{R}$) or negation $-f$ on case statements
$f$ are straightforward; the unary operation is simply applied to each
$f_i$ ($1 \leq i \leq k$). Intuitively, to perform a \emph{binary
  operation} on two case statements, we simply take the cross-product
of the logical partitions of each case statement and perform the
corresponding operation on the resulting paired partitions.  Letting
each $\phi_i$ and $\psi_j$ denote generic first-order formulae, we can
perform the ``cross-sum'' $\oplus$ of two (unnamed) cases in the
following manner:

{\footnotesize 
\begin{center}
\begin{tabular}{r c c c l}
&
\hspace{-6mm} 
  $\begin{cases}
    \phi_1: & f_1 \\ 
    \phi_2: & f_2 \\ 
  \end{cases}$
$\oplus$
&
\hspace{-4mm}
  $\begin{cases}
    \psi_1: & g_1 \\ 
    \psi_2: & g_2 \\ 
  \end{cases}$
&
\hspace{-2mm} 
$ = $
&
\hspace{-2mm}
  $\begin{cases}
  \phi_1 \wedge \psi_1: & f_1 + g_1 \\ 
  \phi_1 \wedge \psi_2: & f_1 + g_2 \\ 
  \phi_2 \wedge \psi_1: & f_2 + g_1 \\ 
  \phi_2 \wedge \psi_2: & f_2 + g_2 \\ 
  \end{cases}$
\end{tabular}
\end{center}
}
\normalsize

Likewise, we can perform $\ominus$ and $\otimes$ by,
respectively, subtracting or multiplying partition values (as opposed
to adding them) to obtain the result.  Some partitions resulting from
the application of the $\oplus$, $\ominus$, and $\otimes$ operators
may be inconsistent (infeasible); we may simply discard such 
partitions as they are irrelevant to the function value.
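The cross-product construction for $\oplus$, $\ominus$, and $\otimes$ can be mimicked in code. The sketch below is our purely evaluative stand-in for the symbolic operation (partitions are (test, value) pairs of Python callables; infeasible-partition pruning is omitted), with the binary operation passed in as \texttt{op}:

```python
def cross_apply(case_f, case_g, op):
    """Binary case operation: pair every partition of f with every
    partition of g, conjoin their tests, and combine values with op.
    Infeasible (unsatisfiable) partitions are kept, not pruned."""
    return [(lambda s, p=phi, q=psi: p(s) and q(s),
             lambda s, a=f_i, b=g_j: op(a(s), b(s)))
            for phi, f_i in case_f
            for psi, g_j in case_g]

def evaluate(case, s):
    """Evaluate a (possibly partial) case statement at state s."""
    for test, val in case:
        if test(s):
            return val(s)
    return None
```

The cross-sum $f \oplus g$ is then `cross_apply(f, g, lambda a, b: a + b)`; $\ominus$ and $\otimes$ swap in subtraction and multiplication.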

For SDP, we will also need to perform maximization, restriction,
and substitution on case statements.  
\emph{Symbolic maximization} is fairly straightforward
to define:
\vspace{-5mm}

{\footnotesize
\begin{center}
\begin{tabular}{r c c c l}
&
\hspace{-9mm} $\max \Bigg(
  \begin{cases}
    \phi_1: & f_1 \\ 
    \phi_2: & f_2 \\ 
  \end{cases}$
$,$
&
\hspace{-4mm}
  $\begin{cases}
    \psi_1: & g_1 \\ 
    \psi_2: & g_2 \\ 
  \end{cases} \Bigg)$
&
\hspace{-4mm} 
$ = $
&
\hspace{-4mm}
  $\begin{cases}
  \phi_1 \wedge \psi_1 \wedge f_1 > g_1    : & f_1 \\ 
  \phi_1 \wedge \psi_1 \wedge f_1 \leq g_1 : & g_1 \\ 
  \phi_1 \wedge \psi_2 \wedge f_1 > g_2    : & f_1 \\ 
  \phi_1 \wedge \psi_2 \wedge f_1 \leq g_2 : & g_2 \\ 
  \phi_2 \wedge \psi_1 \wedge f_2 > g_1    : & f_2 \\ 
  \phi_2 \wedge \psi_1 \wedge f_2 \leq g_1 : & g_1 \\ 
  \phi_2 \wedge \psi_2 \wedge f_2 > g_2    : & f_2 \\ 
  \phi_2 \wedge \psi_2 \wedge f_2 \leq g_2 : & g_2 \\ 
  \end{cases}$
\end{tabular}
\end{center}
}
One can verify that the resulting case statement is still
within the case language defined previously.  At first
glance this may seem like a cheat and little is gained
by this symbolic sleight of hand.  As it turns out, simply
having a well-defined case partition representation of the
maximization will facilitate the regression step required
for SDP.  Furthermore, the 
XADD that we introduce later will be able to exploit the 
internal decision structure of this
maximization to represent it much more compactly.
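Symbolic maximization can be sketched in the same spirit: each pair of partitions is split on the decision $f_i > g_j$, keeping the winner on each side. The code below is our evaluative illustration (partitions as (test, value) pairs of Python callables), not the symbolic machinery itself:

```python
def case_max(case_f, case_g):
    """Casewise max of two case statements: for each pair of partitions,
    emit two new partitions split on the decision f_i > g_j."""
    out = []
    for phi, f_i in case_f:
        for psi, g_j in case_g:
            out.append((lambda s, p=phi, q=psi, a=f_i, b=g_j:
                        p(s) and q(s) and a(s) > b(s), f_i))
            out.append((lambda s, p=phi, q=psi, a=f_i, b=g_j:
                        p(s) and q(s) and a(s) <= b(s), g_j))
    return out

def evaluate(case, s):
    """Evaluate a (possibly partial) case statement at state s."""
    for test, val in case:
        if test(s):
            return val(s)
    return None
```

Note the quadratic blow-up in partitions, which is exactly what the XADD's shared decision structure is later introduced to contain.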

The next operation of \emph{restriction} is fairly simple: in this
operation, we want to restrict a function $f$ to apply only in cases
that satisfy some formula $\phi$, which we write as $f|_{\phi}$.  
This can be done by simply appending $\phi$ to each case partition
as follows:
{\footnotesize
\begin{center}
\begin{tabular}{r c c l}
&
\hspace{-6mm} 
  $f = \begin{cases}
    \phi_1: & f_1 \\ 
    : & : \\ 
    \phi_k: & f_k \\ 
  \end{cases}$
&

&
\hspace{-2mm}
  $f|_{\phi} = \begin{cases}
    \phi_1 \land \phi : & f_1 \\ 
    : & : \\ 
    \phi_k \land \phi : & f_k \\ 
  \end{cases}$
\end{tabular}
\end{center}
}
Clearly $f|_{\phi}$ only applies when $\phi$ holds and is
undefined otherwise, hence $f|_{\phi}$ is a partial function
unless $\phi \equiv \top$.

The final operation that we need to define for case
statements is substitution.  \emph{Symbolic substitution} simply takes
a set $\sigma$ of variables and their substitutions, e.g., 
$\sigma = \{ x_1' = x_1 + x_2, x_2' = x_1^2 \exp(x_2) \}$ where
the LHS of the $=$ represents the substitution variable and the
RHS of the $=$
the expression that should be substituted in its place.  No variable
occurring in any RHS expression of $\sigma$ can also occur in any 
LHS expression of $\sigma$.
We write the substitution of a non-case function $f_i$ with $\sigma$ 
as $f_i\sigma$; as an example, for the $\sigma$ defined previously and 
$f_i = x_1' + x_2'$, we obtain $f_i\sigma = x_1 + x_2 + x_1^2 \exp(x_2)$ as
would be expected.  We can also substitute into case partitions $\phi_j$
by applying $\sigma$ to its LHS and RHS operands; as an example, if
$\phi_j \equiv x_1' \leq \exp(x_2')$ then 
$\phi_j \sigma \equiv x_1 + x_2 \leq \exp(x_1^2 \exp(x_2))$.
Having now defined substitution of $\sigma$ for non-case functions $f_i$ and case
partitions $\phi_j$ we can define it for case statements in general:

{\footnotesize
\begin{center}
\begin{tabular}{r c c l}
&
\hspace{-6mm} 
  $f = \begin{cases}
    \phi_1: & f_1 \\ 
    : & : \\ 
    \phi_k: & f_k \\ 
  \end{cases}$
&

&
\hspace{-2mm}
  $f\sigma = \begin{cases}
    \phi_1\sigma: & f_1\sigma \\ 
    : & : \\ 
    \phi_k\sigma: & f_k\sigma \\ 
  \end{cases}$
\end{tabular}
\end{center}
}
\normalsize

One useful property of substitution is that
if $f$ has mutually exclusive partitions $\phi_i$ ($1 \leq i \leq k$)
then $f\sigma$ must also have mutually exclusive partitions ---
this follows from the logical consequence that 
if $\phi_1 \land \phi_2 \vdash \bot$
then $\phi_1\sigma \land \phi_2\sigma \vdash \bot$.
We will exploit this property next in SDP for DC-MDPs.
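Operationally, substitution amounts to rewriting the state through $\sigma$ before any partition test or value function reads it. The sketch below is our evaluative illustration (case statements as (test, value) pairs of Python callables; $\sigma$ maps each substitution variable to a function of the original state, so no LHS variable occurs in any RHS):

```python
def substitute(case, sigma):
    """Apply substitution sigma (dict: variable -> function of the
    original state) to a case statement by remapping the state before
    each partition test and value function reads it."""
    def remap(s):
        t = dict(s)
        t.update({v: expr(s) for v, expr in sigma.items()})
        return t
    return [(lambda s, p=phi: p(remap(s)),
             lambda s, f=f_i: f(remap(s)))
            for phi, f_i in case]

def evaluate(case, s):
    """Evaluate a (possibly partial) case statement at state s."""
    for test, val in case:
        if test(s):
            return val(s)
    return None
```

Since every partition is remapped through the same state rewrite, mutually exclusive partitions remain mutually exclusive, mirroring the property above.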



\subsection{Symbolic Dynamic Programming (SDP)}

% \ref{sec:dcmdps} \ref{sec:soln} \ref{eq:qfun} \ref{eq:vfun} 

In the SDP solution for DC-MDPs, our objective will be to take
a DC-MDP as defined in Section~\ref{sec:dcmdps}, apply value
iteration as defined in Section~\ref{sec:soln}, and produce
the final optimal value function $V^h$ at horizon $h$ in the form
of a case statement.

As a first step, we note that $V^0(\vec{b},\vec{x}) = R(\vec{b},\vec{x})$
and $R(\vec{b},\vec{x})$ as described in Section~\ref{sec:dcmdps}
has exactly the form of a case statement.  So trivially, we have
satisfied our objective for $h=0$.  

Next, $h > 0$ requires the application of SDP.  
Fortunately, given our previously defined
operations, SDP is straightforward and can be divided into four 
steps: 
\begin{enumerate}
\item {\it Prime the Value Function}: Since $V^{h}$ will become
the ``next state'' in value iteration, we setup a substitution
$\sigma = \{ b_1 = b_1', \ldots, b_n = b_n', x_1 = x_1', \ldots, x_m = x_m' \}$
and obtain $V'^{h} = V^{h}\sigma$.
\item {\it Continuous Regression}: 
Now that we have our primed value function $V'^{h}$ in case
statement format defined over the next state variables $(\vec{b}',\vec{x}')$
we first evaluate the integral marginalization 
$\int_{\vec{x}'}$ over the continuous variables in~\eqref{eq:qfun}.
What follows is one of the \emph{key novel insights of SDP} in the context of
DC-MDPs --- the integration 
$\int_{x_j'} \delta[x_j' = g(\vec{x})] V'^{h} dx_j'$ 
simply \emph{triggers the substitution} $\sigma = \{ x_j' = g(\vec{x}) \}$
on $V'^{h}$, that is
\begin{align}
\int_{x_j'} \delta[x_j' = g(\vec{x})] V'^{h} dx_j' \; = \; V'^{h} \{x_j' = g(\vec{x}) \} . \label{eq:one_int}
\end{align}
Since we have
disallowed synchronic arcs between variables in $\vec{x}'$ 
in the transition DBN, we can integrate
out each variable $x_j'$ ($1 \leq j \leq m$) independently; thus we can perform the
operation in~\eqref{eq:one_int} repeatedly in sequence \emph{for each}
$x_j'$ for every action $a$.  The only
additional complication is that the form of 
$P(x_j'|\vec{b},\vec{x},a)$ is a \emph{conditional} difference
equation, c.f.~\eqref{eq:ex_csde}, and represented generically
as follows:
\begin{align*}
   P(x_j'|\vec{b},\vec{x},a) = \delta\left[ x_j' = \begin{cases}
    \phi_1: & f_1 \\ 
    : & : \\ 
    \phi_k: & f_k \\ 
  \end{cases} \right]
\end{align*}
Hence to perform~\eqref{eq:one_int} on this more general
representation, we obtain that $\int_{x_j'} P(x_j'|\vec{b},\vec{x},a) V'^{h} dx_j'$
\begin{align*}
    = \begin{cases}
    \phi_1: & V'^{h} \{ x_j' = f_1 \} \\ 
    : & : \\ 
    \phi_k: & V'^{h} \{ x_j' = f_k \}  \\ 
  \end{cases}
\end{align*}
Here we note that because $V'^{h}$ is already a case statement, we simply
replace the single partition $\phi_i$ with the multiple partitions
of $V'^{h} \{ x_j' = f_i \}|_{\phi_i}$.  

To complete the continuous regression, if we initialize 
$\tilde{Q}_a^{h+1} := V'^{h}$ for each action $a \in A$ and repeat
the above integral for each $x_j'$, updating $\tilde{Q}_a^{h+1}$ each time,
then after the elimination of all $x_j'$, $\tilde{Q}_a^{h+1}$ holds
the partial regression of $V'^{h}$ through the continuous variables for
each action.
\item {\it Discrete Regression}: Now that we have our partial
regression $\tilde{Q}_a^{h+1}$ for each action $a$, we proceed
to derive the full backup $Q_a^{h+1}$ from $\tilde{Q}_a^{h+1}$
by evaluating the discrete 
marginalization $\sum_{\vec{b}'}$ in~\eqref{eq:qfun}.
Because we previously disallowed synchronic arcs
between the variables in $\vec{b}'$ 
in the transition DBN, we can sum out each variable $b_i'$ ($1 \leq i \leq n$) 
independently.  Hence, initializing
$Q_a^{h+1} := \tilde{Q}_a^{h+1}$,
we perform the discrete regression by applying the following iterative
process \emph{for each} $b_i$ in any order
and for each action $a$:
\begin{align}
Q_a^{h+1} := & \left[ Q_a^{h+1} \otimes P(b_i|\vec{b},\vec{x},a) \right]|_{b_i} \nonumber \\
 & \oplus \left[ Q_a^{h+1} \otimes P(b_i|\vec{b},\vec{x},a) \right]|_{\neg b_i}.
\end{align}
Note that both $Q_a^{h+1}$ and $P(b_i|\vec{b},\vec{x},a)$ can be represented
as case statements (discrete CPTs \emph{are} case statements), 
and each operation produces a case statement.
Thus, once this process is complete, we have marginalized over
all $\vec{b}'$ and $Q_a^{h+1}$ is the symbolic representation
of the intended Q-function.
\item {\it Maximization}: Now that we have $Q_a^{h+1}$ in
case format for each action $a \in \{a_1,\ldots,a_p\}$, obtaining
$V^{h+1}$ in case format as defined in~\eqref{eq:vfun} requires
sequentially applying
\emph{symbolic maximization} as defined previously:
\begin{align*}
V^{h+1} & = 
\max(Q_{a_1}^{h+1},\max(\ldots,\max(Q_{a_{p-1}}^{h+1},Q_{a_p}^{h+1})))
\end{align*}
\end{enumerate}
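To make the four steps above concrete, the following is a minimal sketch of the regression and maximization operations (our own illustration in Python/SymPy, not the paper's implementation), using a hypothetical one-variable difference equation $x' = x + 2$, a constant CPT, and two hypothetical Q-functions:

```python
import sympy as sp

x, xp = sp.symbols('x xp', real=True)
bp = sp.Symbol('bp')                 # next-state boolean encoded as 0/1

# hypothetical primed value function V'^h over the next state (bp, xp)
V = bp * xp**2 + (1 - bp) * 3 * xp

# (2) continuous regression: integrating against delta[xp = g(x)]
#     simply triggers the substitution xp -> g(x), as in Eq. (one_int)
g = x + 2                            # hypothetical difference equation
regressed = sp.integrate(sp.DiracDelta(xp - g) * V, (xp, -sp.oo, sp.oo))
assert sp.simplify(regressed - V.subs(xp, g)) == 0

# (3) discrete regression: sum out the boolean bp against its CPT
p = sp.Rational(9, 10)               # hypothetical P(bp = 1 | b, x, a)
Q = regressed.subs(bp, 1) * p + regressed.subs(bp, 0) * (1 - p)

# (4) maximization: a symbolic casemax over action Q-functions,
#     which introduces a new decision on the leaf comparison
Q2 = x + 10                          # hypothetical second Q-function
V_next = sp.Piecewise((Q, Q >= Q2), (Q2, True))
```

Note that step (2) performs no actual quadrature: the Dirac delta collapses the integral to a substitution, which is exactly why the continuous marginal remains in closed form.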
By induction, because $V^0$ is a case statement and applying
SDP to $V^h$ in case statement form produces $V^{h+1}$ in case
statement form, we have achieved our intended
objective with SDP.  On the issue of correctness,
we note that each operation above simply implements one of the
dynamic programming operations in \eqref{eq:qfun} or \eqref{eq:vfun};
correctness thus follows from verifying that (a) each case
operation produces the correct result and (b) each case operation
is applied in the correct sequence as defined in \eqref{eq:qfun} or 
\eqref{eq:vfun}.

%To make this concrete, we provide an example of SDP for \Knapsack.

On a final note, observe that SDP holds for \emph{any} symbolic
case statements; we have not restricted ourselves to rectangular
piecewise functions, piecewise linear functions, or even piecewise
polynomial functions.  Because the SDP solution is purely symbolic,
SDP applies to \emph{any} DC-MDP whose bounded symbolic functions
can be written in case format!  Of course, that is the theory;
next we meet practice.

\section{Extended ADDs (XADDs)}

In practice, it can be prohibitively expensive to maintain
a case statement representation of a value function with explicit
partitions.  Motivated by the SPUDD~\cite{spudd} algorithm, which
maintains compact value function representations for finite discrete
factored MDPs using algebraic decision diagrams (ADDs)~\cite{bahar93add},
we extend this formalism to handle continuous variables in a data
structure we refer to as the XADD.  An example XADD for the optimal
\Knapsack value function from~\eqref{eq:vfun_knapsack} is provided
in Figure~\ref{fig:knapsack_vfun}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{knapsack2.pdf}
\end{center}
\vspace{-3mm}
\caption{\footnotesize The optimal value function for \Knapsack\ 
as a decision diagram: 
the \emph{true} branch is solid, the \emph{false}
branch is dashed.} \label{fig:knapsack_vfun}
\vspace{-3mm}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In brief, we note that an XADD is like an ADD except that (a) the decision
nodes can contain arbitrary inequalities, equalities, or disequalities (one
per node) and (b) the leaf nodes can represent arbitrary functions.
The decision nodes still have a fixed order from root to leaf,
and the standard ADD
operations to build a canonical ADD (\textsc{Reduce}) and 
to perform a binary operation on two ADDs (\textsc{Apply}) 
still apply in the case of XADDs.
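As a rough structural sketch of this data structure (our own illustration, not the actual implementation), decision nodes hold an arbitrary test and leaf nodes hold an arbitrary function; here tests and leaves are modeled as callables of the state:

```python
# a minimal XADD-like structure: decision nodes carry an arbitrary
# boolean test, leaves carry an arbitrary function of the state
def D(test, true_br, false_br):
    return ("D", test, true_br, false_br)

def L(f):
    return ("L", f)

def evaluate(node, x):
    """Walk the diagram from root to leaf for a concrete state x."""
    if node[0] == "L":
        return node[1](x)
    _, test, true_br, false_br = node
    return evaluate(true_br if test(x) else false_br, x)

# a two-partition value function:  if x <= 5 then x + 1 else 10
v = D(lambda x: x <= 5, L(lambda x: x + 1), L(lambda x: 10))
print(evaluate(v, 3))   # 4
print(evaluate(v, 7))   # 10
```

The point of the sketch is only that nothing in the traversal depends on the internal form of the tests or leaves, which is exactly why the standard \textsc{Reduce} and \textsc{Apply} machinery carries over.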

While exact solutions using symbolic dynamic
programming are possible in principle for arbitrary symbolic CSDE transition
and reward functions, we note that it is much more difficult to
devise a canonical and compact form for representations 
such as~\eqref{eq:expr_reward}
in comparison to~\eqref{eq:simple_reward}.
Hence, while we have used general examples throughout the paper
to demonstrate the expressiveness of our approach, we restrict
XADDs to use \emph{polynomial} functions only.  The main advantage
of this restriction for the XADD is that we can put the leaf and decision nodes
in a \emph{unique, canonical} form, which allows us to minimize 
redundancy in the XADD representation of a case statement.

It is fairly straightforward for XADDs to support all case operations
required for SDP.  Standard operations like unary multiplication,
negation, $\oplus$, and $\otimes$ are implemented exactly as they
are for ADDs.  The fact that the decision nodes have internal structure
is irrelevant, although this means that certain paths in the XADD
may be inconsistent or infeasible (due to parent decisions).  To
remedy this, when the XADD has only linear decision nodes and
linear leaf functions, we can use the feasibility checker of
a linear programming solver (e.g., as also done in~\cite{penberthy94}) 
to prune unreachable nodes in the XADD; later we show results demonstrating
impressive reductions in XADD size using this style of pruning.
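To illustrate this style of pruning (a sketch under the assumption of purely linear decisions; the solver and encoding here are our own choices), each path through the XADD accumulates one linear constraint per decision, and a zero-objective linear program tests whether that constraint set is satisfiable:

```python
import numpy as np
from scipy.optimize import linprog

def path_feasible(A_ub, b_ub):
    """Check whether {x : A_ub @ x <= b_ub} is nonempty by solving a
    zero-objective LP; linprog reports status 2 on infeasibility."""
    n = np.asarray(A_ub).shape[1]
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n)
    return res.status == 0

# a path taking decisions  x <= 5  (true branch)  and  x >= 7  (true
# branch) yields constraints x <= 5 and -x <= -7, which are jointly
# infeasible, so any subtree below this path can be pruned
print(path_feasible([[1.0], [-1.0]], [5.0, -7.0]))   # False
print(path_feasible([[1.0]], [5.0]))                 # True
```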

The only two XADD operations that pose difficulty are substitution
and maximization.  In principle, substitution is simple; the only
caveat is that substitutions change the decision nodes, and hence
decision nodes may fall out of order.  We can use the 
recursive application of ADD binary operations $\otimes$ and $\oplus$ 
as given in Algorithm~\ref{fig:correct} to correctly reorder the
nodes in an XADD $F$ after substitution.  A related reordering
issue occurs during XADD maximization; because XADD maximization
can introduce new decision nodes (which occurs at the leaf when
two leaf functions are compared) and these decision nodes may
be out of order w.r.t.\ the diagram, reordering as defined
in Algorithm~\ref{fig:correct} must also be applied after
maximization.  

On a final note, we mention that an implementation of case statements
without any attempt to merge and simplify cases often cannot get
past the first or second iteration of SDP; as our results show next,
XADDs allow SDP to perform quite well in practice.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\incmargin{1em}
%\linesnumbered
\begin{algorithm}[t!]
\SetKwFunction{getCanonicalNode}{{\sc GetCanonicalNode}}
\SetKwFunction{reduce}{{\sc Reorder}}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}

\Input{$F$ (root node for possibly unordered XADD)}
\Output{$F_r$ (root node for an ordered XADD)}
\BlankLine
\Begin{
   //if terminal node, return canonical terminal node\\
   \If{F is terminal node}
   {
   \Return{canonical terminal node for polynomial of $F$}\;
   }
   //nodes have a $\mathit{true}$ \& $\mathit{false}$ branch and $\mathit{var}$ id\\
   \If{$F \rightarrow F_r$ is not in Cache}
   {
    $F_{\mathit{true}}$ = \reduce{$F_{\mathit{true}}$} $\otimes \; \mathbb{I}[F_\mathit{var}]$ \;
    $F_{\mathit{false}}$ = \reduce{$F_{\mathit{false}}$} $\otimes \; \mathbb{I}[\neg F_\mathit{var}]$\;
    $F_r = F_{\mathit{true}} \oplus F_{\mathit{false}}$\;
    insert $F \rightarrow F_r$ in Cache\;
   } 
   \Return{$F_r$}\;
}
\caption{{\sc Reorder}(F)  \label{fig:correct}}
\end{algorithm}
\decmargin{1em}
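The reordering recursion of Algorithm~\ref{fig:correct} can be seen in action with a simplified sketch (our own illustration with boolean-style tests, numeric leaves, and a fixed global order; the real XADD operates on symbolic decisions and polynomial leaves):

```python
from operator import add, mul

# global decision order: lower value means closer to the root
ORDER = {"a": 0, "b": 1}

def is_leaf(n):
    return not isinstance(n, tuple)

def apply_op(op, f, g):
    """Shannon-expansion Apply on ordered diagrams; nodes are
    (test, true_branch, false_branch) tuples, leaves are numbers."""
    if is_leaf(f) and is_leaf(g):
        return op(f, g)
    tests = [n[0] for n in (f, g) if not is_leaf(n)]
    t = min(tests, key=ORDER.get)          # earliest test goes on top
    fh, fl = (f[1], f[2]) if not is_leaf(f) and f[0] == t else (f, f)
    gh, gl = (g[1], g[2]) if not is_leaf(g) and g[0] == t else (g, g)
    hi, lo = apply_op(op, fh, gh), apply_op(op, fl, gl)
    return hi if hi == lo else (t, hi, lo)

def reorder(F):
    """Rebuild an ordered diagram from a possibly unordered one by
    multiplying each recursively reordered branch with its decision
    indicator and summing, as in the Reorder algorithm."""
    if is_leaf(F):
        return F
    t, hi, lo = F
    hi_part = apply_op(mul, reorder(hi), (t, 1, 0))   # I[t]
    lo_part = apply_op(mul, reorder(lo), (t, 0, 1))   # I[not t]
    return apply_op(add, hi_part, lo_part)

# out-of-order diagram: test "b" sits above test "a"
F = ("b", ("a", 3, 1), 2)
print(reorder(F))   # ("a", ("b", 3, 2), ("b", 1, 2))
```

The indicator products confine each branch to the region where its decision holds, so the $\oplus$ of the two products reassembles the same function with every test pushed into its proper position in the global order.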
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\section{Empirical Results}

We implemented two versions of our proposed algorithm: one that does
not prune nodes of the XADD and another that uses a linear programming
solver to prune unreachable nodes. We tested our algorithms on two
versions of the Mars Rover domain (adapted from~\cite{bresina02}), named
\MarsRoverL ~and \MarsRoverNL. In this domain, a rover must
approach a target point and take spectral images of the area. All
actions consume time and energy.  There are also some domain
constraints, e.g., some pictures can be taken only within a certain
time window of the day and may require different levels of energy.


\paragraph{\MarsRoverL ~Domain.} This version has two continuous variables,
\emph{time} and \emph{energy}, and a varying number of boolean
variables (target points, rover locations, and taken-picture flags).
There are actions for taking different pictures and for moving from one
location to another, which are conditioned on linear expressions over
the time and energy variables. The reward is also a function of time
and energy; e.g., the reward for action $\mathit{takepicture}_i$
(take a picture of target $i$) is given by:

{\scriptsize
\begin{align}
\nonumber
\mathit{energy} > 3 + 0.0002 \cdot \mathit{time} \,\land\, \mathit{atp}_1 \,\land\, 3600 < \mathit{time} < 50400 : \quad R = 110
\end{align}
}
that can be interpreted as: \emph{the rover requires a greater reserve
  of energy before executing an action later in the day}.

\paragraph{\MarsRoverNL ~Domain.} This version has two different continuous
variables, the geographic coordinates \emph{x} and \emph{y}, and
boolean variables related to target points and taken-picture flags. The
actions are the same as in the \MarsRoverL ~domain but are conditioned on
non-linear expressions over the continuous $x$ and $y$ variables. The reward is
also a function of $x$ and $y$; e.g., the reward for action
$\mathit{takepicture}_i$ is given by:

{\footnotesize
\begin{align}
\nonumber
R = \begin{cases}
x^2 + y^2 < 4 \land \mathit{haspicture}_i : & 0 \\
x^2 + y^2 < 4 \land \neg \mathit{haspicture}_i : & 4 - x^2 - y^2 \\
x^2 + y^2 \geq 4 : & 0
\end{cases} 
\end{align}
}

which can be interpreted as: \emph{the rover's reward increases as it
  gets nearer to the picture point, but reward is given only within
  a certain maximum radius of this point; in this example, the
  radius is 2 (since $x^2 + y^2 < 4$) and the radius condition is
  centered on the target location $(0,0)$.}


For the above domains, we ran experiments to test our algorithms
in terms of time and space cost while varying the horizon and the
problem size.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.24\textwidth]{FIGURES/SpaceVsPictureLinear.pdf}
\includegraphics[width=0.24\textwidth]{FIGURES/TimeVsPicturesLinear.pdf}
}
\vspace{-3mm}
\caption{\footnotesize Space and time for different problem sizes of
  the \MarsRoverL ~Domain.}
\label{fig:lin}
\vspace{-3mm}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In Figure~\ref{fig:lin} we show, for the \MarsRoverL ~domain, how the
number of nodes of the XADD (representing the value function) varies
with each iteration (horizon) and with different problem sizes (given by
the number of pictures). It is impressive that, for such a domain with
non-rectangular piecewise boundaries, we can find exact optimal
solutions for problems up to horizon 8 by applying an exact SDP
approach. Note that the number of nodes increases considerably at
each iteration; however, whenever a solution is found within a limit of
400 seconds, it in fact takes less than 100 seconds to finish.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.24\textwidth]{FIGURES/SpaceVsPictures.pdf}
\includegraphics[width=0.24\textwidth]{FIGURES/TimeVsPictures.pdf}
}
\vspace{-3mm}
\caption{\footnotesize Space and time for different problem sizes of
  the \MarsRoverNL ~Domain.} 
\label{fig:nonlin}
\vspace{-3mm}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Figure~\ref{fig:nonlin} presents the same analysis for the
\MarsRoverNL ~domain, a type of non-linear DC-MDP problem
that has never been exactly solved before. For such a complex problem,
we could solve exactly up to horizon 3 and problem sizes of up to 4
pictures, showing that our extension of symbolic dynamic programming
(SDP) techniques can provide exact optimal solutions for a vastly
expanded class of DC-MDPs, including problems containing nonlinear
difference equations.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.24\textwidth]{FIGURES/SpaceVsHorizonRoverLinear3.pdf}
\includegraphics[width=0.24\textwidth]{FIGURES/TimeVsHorizonRoverLinear3.pdf}
}
\vspace{-3mm}
\caption{\footnotesize Space and time for different iterations (horizons) of
  the \MarsRoverL ~Domain with 3 image target points.} 
\label{fig:lin3}
\vspace{-3mm}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


Figure~\ref{fig:lin3} shows, for the \MarsRoverL ~problem with
three image target points, the advantage of pruning unreachable
nodes from the XADD, demonstrating an impressive reduction in
XADD size. Note that without the pruning method, the time and
number of nodes grow exponentially, while by applying the feasibility
checker of a linear programming solver, we can solve a problem with 3
pictures up to horizon 8. 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.50\textwidth]{FIGURES/Knapsack.pdf}
\includegraphics[width=0.50\textwidth]{FIGURES/MarsRoverLinear2.pdf}
}
\subfigure{
\includegraphics[width=0.50\textwidth]{FIGURES/MarsNonlinear.pdf}
}
\vspace{-3mm}
\caption{\footnotesize Exact optimal value function for domains with
  non-rectangular piecewise boundaries.}
\label{fig:plot3D}
\vspace{-3mm}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

We also show in Figure~\ref{fig:plot3D} the exact optimal value
function for each of the three domains (\Knapsack , \MarsRoverL ~and
\MarsRoverNL) as the values of the corresponding continuous variables
vary. The boundaries of these 3D plots are clearly non-rectangular.
This is related to the fact that we have defined DC-MDP domains
involving more general expressions in the reward and transition
functions. In particular, the 3D plot for the \MarsRoverNL ~domain has
a cone shape due to the radius conditions in its reward and transition
function definitions. To the best of our knowledge, this is the first
exact analytical solution for a nonlinear DC-MDP problem.

\section{Related Work}

\label{sec:rel_work}

The most relevant vein of related work is that of~\cite{feng04}
and~\cite{li05}, which can perform exact dynamic programming on
DC-MDPs with rectangular piecewise linear reward functions and
transition functions that are delta functions.  While SDP can solve
these same problems, it removes both the rectangularity and piecewise
linearity restrictions while retaining exactness.  
Heuristic search approaches with formal guarantees 
like HAO*~\cite{hao09} are an attractive future extension of SDP;
in fact HAO* currently uses the method of~\cite{feng04}, which could
be directly replaced with SDP.  While~\cite{penberthy94} has considered
general piecewise functions with linear boundaries (and in fact,
we borrow our linear pruning approach from this paper), this work
only applied to fully deterministic settings, not DC-MDPs.

Other work has analyzed limited DC-MDPs having only one continuous
variable.  Clearly, rectangular restrictions are meaningless with
only one continuous variable, so it is not surprising that more
progress has been made in this restricted setting.  One continuous
variable can be useful for optimal solutions to time-dependent MDPs 
(TMDPs)~\cite{boyan01}, and phase transitions can be used to 
arbitrarily approximate one-dimensional continuous distributions,
leading to a bounded approximation approach for arbitrary single
continuous variable DC-MDPs.  While our SDP approach cannot handle
arbitrary stochastic noise in its continuous distributions, it does
exactly solve DC-MDPs with multiple continuous dimensions.

Finally, there are a number of general DC-MDP approximation
approaches that use approximate linear programming~\cite{kveton06}
or sampling in a reinforcement-learning-style approach~\cite{munos02}.
In general, while approximation methods are quite promising in
practice for DC-MDPs, the objective of this paper was to push
the boundaries of exact solutions; however, in some sense, 
we believe that more expressive exact solutions may also inform
better approximations, e.g., by permitting data structures
with non-rectangular piecewise partitions that allow higher-fidelity
approximations.

\section{Conclusions}

In this paper, we introduced a conditional stochastic difference
equation model for the transition function in DC-MDPs.  This
representation facilitated the use of symbolic dynamic programming
techniques to generate exact solutions to DC-MDPs with arbitrary
reward functions and expressive nonlinear transition functions
that far exceed the exact solutions possible with existing DC-MDP
solvers.  In an effort to make SDP practical, we also introduced
the novel XADD data structure for representing arbitrary piecewise
symbolic value functions, and we addressed the complications that
SDP induces for XADDs, such as the need to reorder the decision
nodes after some operations.  All of these are substantial contributions
that establish a new level of expressiveness for DC-MDPs
that can be exactly solved.

There are a number of avenues for future research.  First, it is
important to examine which generalizations of the transition function used
in this work would still permit closed-form exact solutions.  In terms
of better scalability, one avenue would be to explore the use of
initial-state-focused heuristic search-based value iteration like
HAO*~\cite{hao09}, which can be readily adapted to use SDP.  Another
avenue of research would be to adapt the lazy approximation approach
of~\cite{li05} to approximate DC-MDP value functions as piecewise
linear XADDs with linear boundaries, which may allow for better
approximations than current representations that rely on rectangular
piecewise functions.  Along the same lines, ideas from
APRICODD~\cite{apricodd} for bounded approximation of discrete ADD
value functions by merging leaves could be generalized to XADDs.
Altogether, the advances made by this work open up a number of
potential novel research paths that we believe may enable
rapid progress in the field of decision-theoretic planning
with discrete and continuous state.

%\item Continuous actions

\bibliography{cont_mdp}
\bibliographystyle{plain}

\end{document} 
