\documentclass[twoside,11pt]{article}
\usepackage{jair, theapa, rawfonts}
\usepackage{amsmath,amssymb,amsthm}
\usepackage[vlined,algoruled,titlenumbered,noend]{algorithm2e}
\usepackage{epsfig,subfigure}

\def\argmax{\operatornamewithlimits{arg\max}}
\newcommand{\casemax}{\mathrm{casemax}}
\newcommand{\MarsRover}{\textsc{Mars Rover}}
\newcommand{\casemin}{\mathrm{casemin}}
\newcommand{\UB}{\mathit{UB}}
\newcommand{\LB}{\mathit{LB}}
\newcommand{\IND}{\mathit{Ind}}
\newcommand{\CONS}{\mathit{Cons}}
\newcommand{\Root}{\mathit{Root}}
\newcommand{\Max}{\mathit{Max}}
\newcommand{\sq}{\hspace{-1mm}}
\newcommand{\sqm}{\hspace{-2mm}}
\newcommand{\true}{\mathit{true}}
\newcommand{\false}{\mathit{false}}
\newcommand{\Knapsack}{\textsc{Knapsack}}
\newcommand{\MarsRoverL}{\textsc{Mars Rover Linear}}
\newcommand{\MarsRoverNL}{\textsc{Mars Rover Nonlinear}}
\newcommand{\InventoryControl}{\textsc{Inventory Control}}
\newcommand{\WaterReservoir}{\textsc{Reservoir Management}}
\newcommand{\MultiWaterReservoir}{\textsc{Multi-Reservoir}}
\newtheorem*{example*}{Example}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{example}[lemma]{Example}

\newenvironment{mydef}[1][Definition]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}}

%\jairheading{1}{1993}{1-15}{6/91}{9/91}
\ShortHeadings{Exact Symbolic Solutions to MDPs}
{Zamani, Sanner}
%\firstpageno{25}


\begin{document}

\title{Exact Symbolic Dynamic Programming for Continuous State-Action MDPs}

\author{\name Zahra Zamani \email zahra.zamani@anu.edu.au \\
       \name Scott Sanner \email ssanner@nicta.com.au \\
       \addr The Australian National University and NICTA,\\
       Canberra, ACT 0200 Australia
}

\maketitle


\begin{abstract}
Many real-world decision-theoretic planning problems are naturally modeled using continuous states and actions, yet work on \emph{exact} planning with Discrete and Continuous Markov Decision Processes (DC-MDPs) has been limited to problems with hyper-rectangular piecewise linear value functions. In this work we lift these restrictions: for continuous states we compute optimal solutions with arbitrary piecewise value functions, and we further extend the framework to handle continuous actions as well as continuous states.
We propose a Symbolic Dynamic Programming (SDP) solution for multivariate states and actions. For discrete actions our solution applies to a wide range of problems, while for continuous actions we find solutions to problems with piecewise linear or quadratic rewards and dynamics.
Our symbolic approach defines all operations required to perform value iteration, including continuous maximization and integration.
We also introduce the compact representation of XADDs, a continuous-variable extension of algebraic decision diagrams (ADDs), defining the properties and algorithms required by SDP.
We demonstrate results for both the discrete and continuous action cases, showing the \emph{first optimal automated solutions} on various problem domains.
\end{abstract}

\section{Introduction}
\label{Introduction}
Many stochastic planning problems in the real-world involving resources, time, or spatial configurations naturally use continuous variables in their state representation.  For example, in the \MarsRover\  problem \cite{bresina02}, a rover must manage bounded continuous resources of battery power and daylight time as it plans scientific discovery tasks for a set of landmarks on a given day.  
A more sophisticated model may also allow continuous variables in the action space. For example, in a different setting of the \MarsRover\ problem, a rover must navigate (i.e., move continuously) within a continuous spatial environment while carrying out assigned scientific discovery tasks.
%Are other examples required here? 
Other examples include \InventoryControl\ problems \cite{Mahootchi2009}, where a business managing continuous resources such as petroleum products must decide what quantity of each item to order subject to uncertain demand, (joint) capacity constraints, and reordering costs; and \WaterReservoir\ problems \cite{reservoir}, where a utility must manage continuous reservoir water levels over continuous time to avoid underflow while maximizing electricity generation revenue.

In recent years, little progress has been made in developing \emph{exact} solutions for DC-MDPs with multiple continuous state variables beyond the subset of DC-MDPs that have an optimal \emph{hyper-rectangular piecewise linear value function} \cite{feng04,li05}. Previous work on \emph{exact} solutions to multivariate continuous state \emph{and} action settings has also been limited to the control theory literature, for the case of linear-quadratic Gaussian (LQG) control \cite{lqgc}.
%i.e., minimizing a quadratic cost function subject to linear dynamics with Gaussian noise in a partially observed setting.  
However, the transition dynamics and reward (or cost) for such problems cannot be piecewise --- a crucial limitation preventing the application of such solutions to many planning and operations research problems. 

In this paper, we provide an exact \emph{symbolic dynamic programming} (SDP) solution to continuous state and action Markov decision processes under two different settings. In the first setting we consider continuous state variables with a discrete action set. This allows us to represent any \emph{arbitrary} piecewise symbolic transition function and obtain optimal value functions that are piecewise functions with non-rectangular boundaries. In the second setting we consider continuous states and actions and provide the solution to a useful subset of problems with \emph{piecewise} linear dynamics,
and \emph{piecewise} linear (or restricted \emph{piecewise} quadratic) reward. Motivating examples introduced below are used throughout the paper to illustrate the benefits and technical details of our approach.

We first consider a continuous state and discrete action setting in the following problem: 
%knapsack + mars_rover OR give a unified example of MARS ROVER? 
%%%%%%
\begin{example*}[\textsc{Continuous State} \MarsRover\ (\textsc{CSMR})]
In the general \MarsRover\ domain, a rover must approach one or more target points and take pictures of them. Actions may consume time and energy. There are also domain constraints; e.g., some pictures can only be taken in a certain time window and may require different levels of energy.
\end{example*}

This problem has two continuous variables --- geographic coordinates $(x,y)$ --- and $k$ boolean variables $b_i$ for each picture point $i$ indicating whether the
rover \emph{has already taken} a picture of point $i$.  There is a
single $\mathit{move}$ action in this domain --- it simply reduces the
distance from the rover to a specific point by $\frac{1}{3}$ of the 
current distance.  The intent of this action is to represent the fact that a rover may move progressively more slowly as it approaches a target position in order to reach the
position with high accuracy. 
There are $k$ additional actions $\mathit{take\-pic}_i$ that take a picture at point $i$ and are conditioned on \emph{nonlinear} expressions over the continuous
$x$ and $y$ variables. The transition function for this action is defined as follows:
%\vspace{-3mm}
{%\footnotesize
\begin{align}
\mathit{take\-pic '}_i & = 
\begin{cases}
x^2 + y^2 < 4 : & 1 \\
x^2 + y^2 \geq 4 \land  \mathit{take\-pic}_i : & 1 \\
x^2 + y^2 \geq 4 \land  \neg \mathit{take\-pic}_i : & 0 \\
\end{cases} \label{trans:nonlin} 
\end{align}
}
%\vspace{-3mm} 
The reward is also a function of $x$ and $y$; the reward for action $\mathit{take\-pic}_i$ is given by:
%\vspace{-3mm}
{%\footnotesize
\begin{align}
R_{\mathit{take\-pic}_i}(x,y,b_i) & = 
\begin{cases}
x^2 + y^2 < 4 \land b_i : & 0 \\
x^2 + y^2 < 4 \land \neg b_i : & 4 - x^2 - y^2 \\
x^2 + y^2 \geq 4 : & 0
\end{cases} \label{rew:nonlin} 
\end{align}
}
%\vspace{-3mm}
This indicates that if the rover has not already taken a picture of
point $i$ and is within a radius of 2 of the picture point
$(0,0)$, then it receives a reward that decreases quadratically
with its distance from the picture point.  Hence for
various points, the rover must trade off whether to take each picture
at its current position, or to obtain a larger reward by
first moving and potentially getting closer before taking the picture.
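As a concrete illustration, the piecewise reward in \eqref{rew:nonlin} can be evaluated procedurally; the following Python sketch (the helper name $\mathtt{takepic\_reward}$ is ours, not part of the formal model) makes the case semantics explicit:

```python
# Evaluation of the piecewise reward for action takepic_i (Eq. rew:nonlin).
# The function name is an illustrative choice, not part of the formalism.
def takepic_reward(x: float, y: float, b_i: bool) -> float:
    dist_sq = x * x + y * y
    if dist_sq < 4 and not b_i:
        # Within radius 2 of (0, 0) and picture not yet taken:
        # reward decreases quadratically with distance from the point.
        return 4 - dist_sq
    # Picture already taken, or rover out of range: no reward.
    return 0.0

takepic_reward(1.0, 1.0, False)  # distance^2 = 2, so reward 4 - 2 = 2.0
```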

Consider an extended version of the problem above that also includes continuous actions:
%%%%%%%
\begin{example*}[\textsc{Continuous State and Action} \MarsRover\ (\textsc{CSAMR})]
A Mars Rover state consists of its continuous position $x$ along a given route.  In a given time step, the rover may move a continuous distance $a \in [-10,10]$.  The rover receives its greatest reward for taking a picture at $x=0$, which quadratically decreases to zero at the boundaries of the range $x \in [-2,2]$.  The rover will
A Mars Rover state consists of its continuous position $x$ along a given route.  In a given time step, the rover may move a continuous distance $a \in [-10,10]$.  The rover receives its greatest reward for taking a picture at $x=0$, which quadratically decreases to zero at the boundaries of the range $x \in [-2,2]$.  The rover will
automatically take a picture when it starts a time step within the range $x \in [-2,2]$ and it only receives this reward once.
\end{example*}

Using boolean variable $b \in \{0,1\}$ to indicate if the picture has
already been taken ($b=1$), $x'$ and $b'$ to denote 
post-action state, and $R$ to denote reward, we 
express the \MarsRover\ CSA-MDP using piecewise dynamics and reward:
\begin{align} 
\hspace{-2.8mm} P(b'\sq=\sq1|x,b) & = 
\begin{cases}
b \lor (x \geq -2 \land x \leq 2): & \sqm 1.0\\
\neg b \land (x < -2 \lor x > 2):  & \sqm 0.0
\end{cases} \label{eq:mr_discrete_trans} \\
\hspace{-2.8mm} P(x'|x,a) & = \delta \left( x' - \begin{cases}
a \geq -10 \land a \leq 10 : & \hspace{-2mm} x + a \\
a < -10 \lor a > 10 : & \hspace{-2mm} x
\end{cases}
\right) \label{eq:mr_cont_trans} \\
\hspace{-2.8mm} R(x,b) & = \begin{cases}
\neg b \land x \geq -2 \land x \leq 2 : & 4 - x^2 \\
b \lor x < -2 \lor x > 2 : & 0
\end{cases} \label{eq:mr_reward}
\end{align}
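Although our approach computes value functions for this model symbolically and exactly, a coarse numerical backup is a useful sanity check on the dynamics \eqref{eq:mr_cont_trans} and reward \eqref{eq:mr_reward}; in the Python sketch below, the helper names and the action-grid resolution are our own illustrative choices:

```python
# Approximate one Bellman backup of the CSAMR model over a discretized
# action set.  All names and the grid resolution are illustrative choices.
def reward(x: float, b: bool) -> float:
    # Eq. eq:mr_reward: quadratic picture reward inside [-2, 2], once only.
    return 4 - x * x if (not b and -2 <= x <= 2) else 0.0

def step(x: float, a: float) -> float:
    # Eq. eq:mr_cont_trans: deterministic move; out-of-range actions do nothing.
    return x + a if -10 <= a <= 10 else x

def q_backup(x: float, b: bool, actions, v0) -> float:
    # V^1(x, b) = R(x, b) + max_a V^0(step(x, a), b'), where b' becomes true
    # once the rover starts a step inside [-2, 2] (Eq. eq:mr_discrete_trans).
    b_next = b or (-2 <= x <= 2)
    return reward(x, b) + max(v0(step(x, a), b_next) for a in actions)

actions = [i / 10.0 for i in range(-100, 101)]   # grid over a in [-10, 10]
v1 = q_backup(11.0, False, actions, lambda x, b: reward(x, b))
# From x = 11 the best move is a = -10, reaching x = 1 for reward 4 - 1 = 3.
```

This numeric check agrees with the qualitative shape of $V^1(x)$ in Figure~\ref{fig:opt_graph}: starting from $x = 11$, one step suffices to reach the rewarding region.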

Two natural questions arise for both of these problem domains:
\begin{enumerate}
\item[(a)] What is the form of the optimal value that can be
obtained from any state over a fixed time horizon?
\item[(b)] What is the corresponding closed-form optimal policy?
\end{enumerate}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
%\centering
\begin{minipage}[b]{0.47\linewidth}
\includegraphics[width=0.8\textwidth]{Figures1/camdp/v1_mr.pdf}\\
\vspace{-2mm}
\includegraphics[width=0.8\textwidth]{Figures1/camdp/v2_mr.pdf}\\
\vspace{-2mm}
\includegraphics[width=0.8\textwidth]{Figures1/camdp/v3_mr.pdf}
\vspace{-3mm}

%\parbox{2.8in}{
\caption{\footnotesize Optimal sum of rewards (value) 
$V^t(x)$ for $b = 0 \, 
(\false)$ for time horizons (i.e., decision stages remaining) $t=0$,
$t=1$, and $t=2$ on the continuous action \MarsRover\ problem.  For $x \in [-2,2]$, the
rover automatically takes a picture and receives a reward quadratic in
$x$.  We initialized $V^0(x,b) = R(x,b)$; for $V^1(x)$, the rover achieves
non-zero value up to $x = \pm 12$ and for 
$V^2(x)$, up to $x = \pm 22$.}
\label{fig:opt_graph}

\end{minipage}
%\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{figure}[t!]
%\centering
%\subfigure{
%\hspace{-1mm}
\begin{minipage}[b]{0.53\linewidth}
\includegraphics[width=0.9\textwidth]{Figures1/camdp/v2_mr_dd.pdf}
\vspace{2mm}

\caption{\footnotesize Optimal value function $V^1(x)$ for the
continuous action \MarsRover\ problem represented as an extended algebraic decision
diagram (XADD).  Here the solid lines represent the $\true$ branch for
the decision and the dashed lines the $\false$ branch.  To evaluate
$V^1(x)$ for any state $x$, one simply traverses the diagram in a
decision-tree like fashion until a leaf is reached where the
non-parenthetical expression provides the \emph{optimal value} and the
parenthetical expression provides the \emph{optimal policy} 
($a = \pi^{*,1}(x)$) to achieve value $V^1(x)$.}
\label{fig:opt_val_pol}
\vspace{-3mm}
\end{minipage}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Argument

If our objective is to maximize the long-term \emph{value} $V$ (i.e.,
the sum of rewards received over an infinite horizon of actions), then
we can write the optimal value achievable from a given state in \textsc{CSMR} 
as a function of state variables:
{\footnotesize
%\begin{tabular}{l}
%$
\begin{align}
V = \begin{cases}
\neg \mathit{take\-pic}_1 \land \mathit{take\-pic}_2 \land (4 -x^2 -y^2\geq 0) \land 
(5 - x^2 - y^2\geq 0) : & 4 -x^2 -y^2 \\
\mathit{take\-pic}_1 \land \neg \mathit{take\-pic}_2 \land (2 -x^2 -y^2\geq 0) \land 
(3 - x^2 - y^2\geq 0) : & 2 -x^2 -y^2 \\
\neg \mathit{take\-pic}_1 \land \mathit{take\-pic}_2 \land (4 -x^2 -y^2\geq 0) \land 
(5 - x^2 - y^2\leq 0) : & -1 \\
\mathit{take\-pic}_1 \land \neg \mathit{take\-pic}_2 \land (2 -x^2 -y^2\geq 0) \land 
(3 - x^2 - y^2\leq 0) : & -1 \\
\mathit{else} : & 0 \\
\end{cases} \label{eq:vfun_rover1}
%$
%\end{tabular}
\end{align}
}
The value function is piecewise and nonlinear, and it contains decision
boundaries like $4 -x^2 -y^2\geq 0$ that are clearly non-rectangular;
rectangular boundaries are restricted to conjunctions of simple
inequalities between a continuous variable and a constant (e.g., $x \leq
5 \land y > 2$).
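The case representation used throughout the paper can be viewed as an ordered list of (partition, value) pairs over the state variables; the minimal evaluator below is an illustrative sketch (the $\mathtt{Case}$ class is our own construct, not the paper's data structure):

```python
# Minimal sketch of the case representation: an ordered list of
# (predicate, value-function) partitions evaluated on a state.
# The Case class is an illustrative construct only.
class Case:
    def __init__(self, partitions):
        self.partitions = partitions  # list of (test, value) callables

    def __call__(self, **state):
        for test, value in self.partitions:
            if test(state):
                return value(state)
        raise ValueError("partitions must exhaustively cover the state space")

# A non-rectangular boundary such as 4 - x^2 - y^2 >= 0 is simply an
# arbitrary predicate over the continuous variables.
v = Case([
    (lambda s: 4 - s["x"] ** 2 - s["y"] ** 2 >= 0,
     lambda s: 4 - s["x"] ** 2 - s["y"] ** 2),
    (lambda s: True, lambda s: 0.0),          # the "else" partition
])
v(x=1.0, y=1.0)  # inside the radius-2 disc: value 4 - 2 = 2.0
```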

Prior to this work, it has been unclear what value function representation supports closed-form computation of the Bellman backup (regression and maximization operations) for general DC-MDP transition and reward structures.  This question has
been affirmatively addressed for the subset of DC-MDPs with transition
functions that are mixtures of delta functions and reward functions
that are hyper-rectangular piecewise linear, which provably lead to value
functions of the same structure \cite{feng04,li05}.  However, the
literature lacks a solution to this problem when, for
example, the reward is instead a piecewise nonlinear function with
linear or nonlinear boundaries, leading to value functions of similar
structure.

To get a sense of the form of the optimal solution to continuous action problems such as \textsc{CSAMR}, we present the 0-, 1-, and 2-step time horizon solutions
for this problem in Figure~\ref{fig:opt_graph}; further, in symbolic
form, we display both the 1-step time horizon value function (the
2-step is too large to display) \emph{and} corresponding optimal
policy in Figure~\ref{fig:opt_val_pol}.  Here, the piecewise nature of
the transition and reward function leads to piecewise
structure in the value function and policy.  Yet despite the intuitive and
simple nature of this result, we are unaware of prior methods that can
produce such exact solutions.
%%%%%%%%%%%
%conclude

In this paper, we propose novel ideas to work around some of the
expressiveness limitations of previous approaches and to
significantly generalize the range of DC-MDPs that
can be solved exactly.  To achieve this more general solution, this
paper contributes a number of important advances:
\begin{itemize}
\item The use of conditional stochastic  equations
facilitates symbolic regression of the value function via
substitutions.  This is precisely the motivation behind symbolic
dynamic programming (SDP) \cite{fomdp} used to solve MDPs with
transitions and reward functions defined in first-order logic, except
that in prior SDP work, only piecewise constant functions have been used;
in this work we introduce techniques for working with \emph{arbitrary} 
piecewise symbolic functions.
\item We show how the continuous action maximization step in the dynamic programming
backup can be evaluated optimally and symbolically --- a task which
amounts to \emph{symbolic} constrained optimization subject to
unknown state parameters.
\item While the \emph{case} representation for the optimal \textsc{CSMR} 
solution shown in \eqref{eq:vfun_rover1} is sufficient in theory to
represent the optimal value functions that our DC-MDP solution
produces, this representation is unreasonable to maintain in practice
since the number of case partitions may grow exponentially on
each receding horizon control step.  For \emph{discrete} factored
MDPs, algebraic decision diagrams (ADDs) \cite{bahar93add} have been
successfully used in exact algorithms like SPUDD \cite{spudd} to
maintain compact value representations.  Motivated by this work we
introduce extended ADDs (XADDs) to compactly represent general
piecewise functions and show how to perform efficient operations on
them, \emph{including} symbolic maximization.  Constraint-based pruning techniques similar to those of \cite{penberthy94} can be applied when XADDs meet certain expressiveness
restrictions. We also prove minimality of the XADD and present the properties and algorithms required for our approach.
\end{itemize}
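As a preview of the XADD representation developed later, a decision diagram over boolean and inequality decisions can be sketched as a DAG of internal decision nodes and terminal expression leaves; the Python classes below are a simplified illustration, not our actual implementation:

```python
# Simplified sketch of an XADD-style decision diagram: internal nodes
# branch on boolean or inequality decisions, leaves hold expressions.
# These classes are illustrative only, not the implementation in this paper.
class Leaf:
    def __init__(self, expr):
        self.expr = expr    # callable: state -> value

    def evaluate(self, state):
        return self.expr(state)

class Decision:
    def __init__(self, test, high, low):
        self.test = test    # callable: state -> bool (the decision)
        self.high = high    # child followed when the decision is true
        self.low = low      # child followed when the decision is false

    def evaluate(self, state):
        child = self.high if self.test(state) else self.low
        return child.evaluate(state)

# V^0(x, b) = R(x, b) from Eq. eq:mr_reward as a tiny diagram:
zero = Leaf(lambda s: 0.0)        # shared leaf, so this is a DAG, not a tree
v0 = Decision(lambda s: s["b"], zero,
              Decision(lambda s: -2 <= s["x"] <= 2,
                       Leaf(lambda s: 4 - s["x"] ** 2), zero))
v0.evaluate({"x": 1.0, "b": False})  # reward branch: 4 - 1 = 3.0
```

Sharing leaves and subdiagrams across branches is what keeps the representation compact relative to the flat case statement in \eqref{eq:vfun_rover1}.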

Aided by these algorithmic and data structure advances, 
we empirically demonstrate that our SDP approach
with XADDs can exactly solve a variety of DC-MDPs with continuous states and actions. 

%%%%%%%


\vskip 0.2in
\bibliography{exactsdp}
\bibliographystyle{theapa}

\end{document}






