\documentclass{article} % For LaTeX2e
\usepackage{nips12submit_e,times}
%\documentstyle[nips12submit_09,times,art10]{article} % For LaTeX 2.09

\renewcommand{\vec}[1]{\mathbf{#1}} % Bold vectors
\def\argmax{\operatornamewithlimits{arg\max}}

\title{Symbolic Dynamic Programming for Continuous State and Observation POMDPs}


\author
{
Zahra Zamani\\ %\& Scott Sanner\\
ANU \& NICTA\\
Canberra, Australia\\
{\small \texttt{zahra.zamani@anu.edu.au}}\\
%{\small \texttt{first.last@anu.edu.au}}\\
\And
Scott Sanner\\
NICTA \& ANU\\
Canberra, Australia\\
{\small \texttt{scott.sanner@nicta.com.au}}\\
\AND
Pascal Poupart\\
U. of Waterloo\\
Waterloo, Canada\\
{\small \texttt{ppoupart@uwaterloo.ca}}\\
\And 
Kristian Kersting\\
Fraunhofer IAIS \& U. of Bonn\\
Bonn, Germany\\
%{\small \texttt{first.last@ubonn.de???}}\\
%{\small \texttt{first.last@iais.fraunhofer.de}}\\
{\small \texttt{kristian.kersting@iais.fraunhofer.de}}\\
}

% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to \LaTeX{} to determine where to break
% the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{}
% puts 3 of 4 authors names on the first line, and the last on the second
% line, try using \AND instead of \And before the third author name.

\newcommand{\xds}{\mathbf{x}_s,\!\mathbf{d}_s}
\newcommand{\xdsp}{\mathbf{x}_s',\!\mathbf{d}_s'}
%\newcommand{\xds}{\mathbf{dx}_s}
%\newcommand{\xdsp}{\mathbf{dx}_s'}
\newcommand{\xdo}{\mathbf{x}_o,\!\mathbf{d}_o}
%\newcommand{\xdo}{\mathbf{dx}_o}
\newcommand{\open}{\mathit{open}}
\newcommand{\close}{\mathit{close}}
\newcommand{\high}{\mathit{high}}
\newcommand{\low}{\mathit{low}}
\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}
\renewcommand{\l}{\langle}
\renewcommand{\r}{\rangle}
\newcommand{\casemax}{\mathrm{casemax}}

\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}

\maketitle

\begin{abstract}
%Partially-observable Markov decision processes (POMDPs) provide a
%powerful model for real-world sequential decision-making problems.  
Point-based value iteration (PBVI) methods have
proven extremely effective for finding
(approximately) optimal dynamic programming solutions to
partially-observable Markov decision processes (POMDPs) when a 
set of initial belief states is known.  However, no PBVI work has
provided \emph{exact point-based backups for both continuous state and
observation spaces}, which we tackle in this paper.  Our key insight is
that while there may be an infinite number of observations,
there are only a finite number of continuous observation partitionings
that are relevant for optimal decision-making when a finite, fixed set
of reachable belief states is considered.  To this end, we make two
important contributions: (1) we show how previous exact symbolic
dynamic programming solutions for continuous state MDPs can be
generalized to \emph{continuous state POMDPs with discrete observations}, and
(2) we show how recently developed symbolic integration methods
allow this solution to be extended 
to PBVI for \emph{continuous state and observation POMDPs} with
potentially correlated, multivariate continuous observation spaces.
%We demonstrate a proof-of-concept implementation on power plant regulation.
\end{abstract}

\section{Introduction} %and Related Work}
% Write intro here

Partially-observable Markov decision processes (POMDPs) are a powerful
modeling formalism for real-world sequential decision-making
problems~\cite{kaebling}.  In recent years, point-based value
iteration (PBVI) methods~\cite{pbvi_jair06,hsvi2,Perseus,gapmin} have proved
extremely successful at scaling (approximately) optimal POMDP
solutions to large state spaces when a set of initial belief states is
known.

While PBVI has been extended to both continuous state and continuous
observation spaces, no prior work has tackled both jointly without sampling.
\cite{Perseus_cont} provides exact point-based backups for continuous
state and discrete observation problems (with approximate sample-based
extensions to continuous actions and observations),
while~\cite{pascal_ijcai05} provides exact point-based backups (PBBs)
for discrete state and continuous observation problems (where
multivariate observations must be conditionally independent).  While
restricted to discrete states, \cite{pascal_ijcai05} provides an
important insight that we exploit in this work: \emph{only a finite
  number of partitionings of the observation space are required to
  distinguish between the optimal conditional policy over a finite set
  of belief states}.

We propose two major contributions:  First, we extend
symbolic dynamic programming for continuous state
MDPs~\cite{sanner_uai11} to POMDPs with discrete observations,
\emph{arbitrary} continuous rewards, and transitions with discrete noise
(i.e., a finite mixture of deterministic transitions).  Second, we
extend this symbolic dynamic programming algorithm to PBVI and the case of
continuous observations (while restricting 
transition dynamics to be piecewise linear with discrete noise, rewards to be
piecewise constant, and observation probabilities and 
beliefs to be uniform) by building 
on~\cite{pascal_ijcai05} to \emph{derive} relevant observation
partitions for potentially correlated, multivariate continuous
observations. % spaces. 


\section{Hybrid POMDP Model} 

\label{sec:model}

A \emph{hybrid} (discrete and continuous)
\emph{partially observable MDP} (H-POMDP) is a tuple $\langle
\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{T},\mathcal{R},\mathcal{Z},\gamma,h
\rangle$.  States $\mathcal{S}$ are given by vector 
$(\vec{d}_s,\vec{x}_s) = (
d_{s_1},\ldots,d_{s_n},x_{s_1},\ldots,x_{s_m} )$ where each $d_{s_i}
\in \{ 0,1 \}$ ($1 \leq i \leq n$) is boolean and each
$x_{s_j} \in \mathbb{R}$ ($1 \leq j \leq m$) is continuous.
We assume a finite, discrete action space $\mathcal{A} = \{ a_1,
\ldots, a_r \}$. Observations
$\mathcal{O}$ are given by the vector $(\vec{d}_o,\vec{x}_o) = (
d_{o_1},\ldots,d_{o_p},x_{o_1},\ldots,x_{o_q} )$ where each $d_{o_i}
\in \{ 0,1 \}$ ($1 \leq i \leq p$) is boolean and each $x_{o_j} \in
\mathbb{R}$ ($1 \leq j \leq q$) is continuous.

Three functions are required for modeling H-POMDPs: (1) a Markovian transition model $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow  [ 0, 1 ]$, defined as the probability of the next state
given the action and previous state; (2) a reward function $\mathcal{R}:\mathcal{S}\times\mathcal{A} \rightarrow \mathbb{R}$, which returns the immediate reward of taking an action in some state; and (3) an observation function $\mathcal{Z} : \mathcal{S} \times \mathcal{A} \times \mathcal{O} \rightarrow [ 0, 1 ]$, which gives the probability of an observation given the next state reached after executing an action.  A discount factor $\gamma$, $0 \leq \gamma \leq 1$, is used to discount rewards $t$ time steps into the future by $\gamma^t$.

We use a dynamic Bayes net (DBN)\footnote{We disallow general 
  synchronic arcs for simplicity of exposition but
  note their inclusion only places restrictions on the variable
  elimination ordering used during the dynamic programming backup
  operation.} to compactly represent the transition model $\mathcal{T}$ over the
factored state variables and we use a two-layer Bayes net to
represent the observation model $\mathcal{Z}$: {\footnotesize
\begin{align}
\mathcal{T}: \;\; &
%p(\vec{d}_s',\vec{x}_s'|\vec{d}_s,\vec{x}_s,a) = 
p(\xdsp|\xds,a) = 
\prod_{i=1}^n p(d_{s_i}'|\xds,a) \prod_{j=1}^m p(x_{s_j}'|\xds, \vec{d}_s',a). \label{eq:trans_model} \\
\mathcal{Z}: \;\; & 
%p(\vec{d}_o,\vec{x}_o|\vec{d}_s,\vec{x}_s,a) = 
p(\xdo|\xdsp,a) = 
\prod_{i=1}^p p(d_{o_i}|\xdsp,a) \prod_{j=1}^q p(x_{o_j}|\xdsp,a). \label{eq:obs_model}
\end{align}}
Probabilities over \emph{discrete} variables $p(d_{s_i}'\!|\xds,\!a)$ and
$p(d_{o_i}\!|\xdsp,\!a)$ may condition on both discrete variables and
(nonlinear) inequalities of continuous variables; this is further
restricted to linear inequalities in the case of continuous
observations.  Transitions over \emph{continuous}
variables $p(x_{s_j}'\!|\xds,\!\vec{d}_s',\!a)$ must be deterministic (but
arbitrary nonlinear) piecewise functions;
%encoded using the Dirac $\delta$ function; 
in the case of continuous observations they are further restricted to
be piecewise linear; this permits discrete noise in the continuous 
transitions since they may condition on stochastically sampled
discrete next-state variables $\vec{d}_s'$.
%(hence allowing discrete noise, but not, e.g., Gaussian noise).
Observation probabilities over continuous variables $p(x_{o_j}\!|\xdsp,\!a)$ 
only occur in the case of continuous observations and are required to be
piecewise constant (a mixture of uniform distributions); the same
restriction holds for belief state representations.
The reward $R(\vec{d},\vec{x},a)$ may be 
an arbitrary (nonlinear) piecewise function in the case of
deterministic observations and a piecewise constant function in the
case of continuous observations.  
We now provide concrete examples.

\textbf{Example} \textsc{\bf (Power Plant)~\cite{steam2}} \emph{The steam
generation system of a power plant evaporates feed-water under restricted 
pressure and temperature conditions to turn a steam turbine.
A reward is obtained when electricity is generated from the turbine 
and the steam pressure and temperature are within safe ranges.
Mixing water and steam makes the
respective pressure and temperature observations $p_o \in \mathbb{R}$
and $t_o \in \mathbb{R}$ on the underlying state $p_s \in \mathbb{R}$
and $t_s \in \mathbb{R}$ highly uncertain.  Actions $A = \{ \open, \close \}$
control temperature and pressure by means of a pressure valve.}

We now present two H-POMDP variants of \textsc{\bf 1D-Power
  Plant}, each using a single temperature state variable $t_s$.
The transition and reward are common to both ---
temperature increments (decrements) with a closed (opened) valve, a
large negative reward is given for a closed valve with $t_s$ exceeding
critical threshold $15$, and positive reward is given for a safe, 
electricity-producing state:
{\footnotesize
\vspace{-1mm}
\begin{align}
\label{eq:trans}
p(t_s'|t_s,a)= \delta\left[ t_s' - 
\begin{cases}
 (a=\open) &: t_s - 5 \\ 
(a = \close) &: t_s + 7 \\
\end{cases}
\right]
\hspace{2mm}
R(t_s,a) = 
\begin{cases}
 (a=\open) &: -1 \\
(a = \close)\wedge (t_s>15) &: -1000 \\
(a = \close)\wedge \neg(t_s>15) &: 100 \\
\end{cases} 
\end{align}
\vspace{-4mm}
}
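Purely as a non-symbolic illustration (the function and variable names here are hypothetical, not part of the model definition), these piecewise dynamics and reward can be transcribed as ordinary Python functions; the symbolic algorithms below instead manipulate such functions in closed case form.

```python
# Illustrative transcription of the 1D-Power Plant dynamics and reward
# above; the paper manipulates these piecewise functions symbolically
# rather than evaluating them pointwise.

def transition(t_s, a):
    """Deterministic next temperature t_s' for action a."""
    if a == "open":
        return t_s - 5.0
    if a == "close":
        return t_s + 7.0
    raise ValueError(f"unknown action: {a}")

def reward(t_s, a):
    """Piecewise reward R(t_s, a)."""
    if a == "open":
        return -1.0
    if a == "close":
        return -1000.0 if t_s > 15.0 else 100.0
    raise ValueError(f"unknown action: {a}")
```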

Next we introduce the \textsc{\bf Discrete Obs. 1D-Power Plant} variant where
we define an \emph{observation space with a single discrete binary 
variable} $o \in \mathcal{O} = \{\high,\low\}$:
{\footnotesize
\vspace{-1mm} 
\begin{align}
\hspace{-2.5mm} p(o=\high|t_s',a=\open) = 
\begin{cases}
  t_s' \leq 15 &: 0.9 \\
  t_s' > 15    &: 0.1 \\
\end{cases}
\;\;\;\;
p(o=\high|t_s',a=\close) = 
\begin{cases}
 t_s' \leq 15 &: 0.7 \\
 t_s' > 15    &: 0.3 \\
\end{cases} \label{eq:ex_disc_obs}
\end{align}
\vspace{-4mm}
}

Finally we introduce the \textsc{\bf Cont. Obs. 1D-Power Plant}
variant where we define an \emph{observation space with
a single continuous variable} $t_o$ uniformly distributed on
an interval of 10 units centered at $t_s'$:
{\footnotesize
\vspace{-1mm}
\begin{align}
p(t_o|t_s',a=\open) = U(t_o;t_s' - 5, t_s' + 5) = 
\begin{cases}
 (t_o>t_s'-5) \wedge (t_o<t_s'+5)         &: 0.1 \\
 (t_o \leq t_s'-5) \vee (t_o \geq t_s'+5) &: 0 \\
\end{cases} \label{eq:ex_cont_obs}
\end{align}
\vspace{-4mm}
}
Though these examples are simple, we note that no prior method could perform exact point-based backups for either problem.
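For illustration only, this piecewise-constant density evaluates pointwise as the following sketch (names hypothetical):

```python
# Pointwise sketch of the uniform observation density above:
# density 0.1 on a 10-unit window centered at the next state t_s', else 0.
def p_obs(t_o, t_s_next):
    """p(t_o | t_s', a=open) = U(t_o; t_s' - 5, t_s' + 5)."""
    if t_s_next - 5.0 < t_o < t_s_next + 5.0:
        return 0.1
    return 0.0
```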

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Figure 1 - policy tree
\begin{figure}[t!]
%\vspace{-1mm}
\begin{center}
\includegraphics[width=0.4\textwidth]{pics/cond_plan2.pdf}
\hspace{10mm}
\includegraphics[width=0.45\textwidth]{pics/dag2.pdf}
\end{center}
\vspace{-2mm}
\caption{\footnotesize (left) Example conditional plan $\beta^h$ for
discrete observations; (right) example $\alpha$-function for $\beta^h$
over state $b \in \{0,1\}, x \in \mathbb{R}$ in decision
diagram form: the \emph{true} (1) branch is solid, the \emph{false} (0) branch is
dashed.}
\label{fig:cond_plan}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Value Iteration for Hybrid POMDPs}

\label{sec:vi}

In an H-POMDP, the agent does not directly observe the states and thus
must maintain a belief state $b(\xds) = p(\xds)$.  For a
given belief state $\vec{b} = b(\xds)$, a POMDP policy $\pi$ can be
represented by a tree corresponding to a conditional plan $\beta$.  An
h-step conditional plan $\beta^h$ can be defined recursively in terms
of $(h-1)$-step conditional plans as shown in
Fig.~\ref{fig:cond_plan} (left).
Our goal is to find a policy $\pi$ that maximizes the value function,
defined as the sum of expected discounted rewards over horizon $h$
starting from initial belief state $\vec{b}$:
{\footnotesize
\vspace{-1mm}
\begin{equation}
V^h_\pi(\vec{b}) = E_{\pi} \left[ \sum\nolimits_{t=0}^{h} \gamma^t \cdot r_t \Big| \vec{b}_0 = \vec{b} \right]
\end{equation}
\vspace{-4mm}
}

where $r_t$ is the reward obtained at time $t$ and $\vec{b}_0$ is the
belief state at $t=0$.  For finite $h$ and belief state $\vec{b}$, the
optimal policy $\pi$ is given by an $h$-step conditional plan
$\beta^h$.  For $h = \infty$, the optimal discounted ($\gamma < 1$)
value can be approximated arbitrarily closely by 
a sufficiently large $h$~\cite{kaebling}.  

Even when the state is continuous (but the actions and observations
are discrete), 
the optimal POMDP value function for finite horizon $h$ is a piecewise linear and
convex function of the belief state $\vec{b}$~\cite{Perseus_cont}, hence 
$V^h$ is given by a maximum over a finite set of
``$\alpha$-functions'' $\alpha^h_i$:
{\footnotesize 
\vspace{-1mm}
\begin{equation}
V^h(\vec{b}) = \max_{\alpha^h_i \in \Gamma^h} \l \alpha^h_i, \vec{b} \r = \max_{\alpha^h_i \in \Gamma^h} \int_{\vec{x}_s} \sum_{\vec{d}_s} \alpha^h_i(\xds) \cdot \vec{b}(\xds) \; d\vec{x}_s 
\end{equation}
\vspace{-4mm}
}
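As a concrete sketch, consider a hypothetical two-state discrete problem: with continuous states the sum over states becomes an integral, but $V^h$ retains the same max-of-linear-functions structure.

```python
# Hypothetical two-state discrete illustration of the alpha-function
# maximum above: V(b) is the maximum over alpha-functions of their
# inner product with the belief b.
alphas = [
    [0.0, 10.0],   # alpha_1(s), s in {0, 1}
    [5.0, 5.0],    # alpha_2
    [10.0, 0.0],   # alpha_3
]

def value(b):
    """V(b) = max_i sum_s alpha_i(s) * b(s)."""
    return max(sum(a * p for a, p in zip(alpha, b)) for alpha in alphas)
```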

Looking ahead, when we tackle continuous state \emph{and} observations,
we will dynamically derive an optimal,
finite partitioning of the observation
space for a given belief state, thus reducing the continuous
observation problem back to a discrete observation problem at every
horizon.

The $\Gamma^h$ in
this optimal $h$-stage-to-go value function can be computed via
Monahan's dynamic programming approach to \emph{value iteration}
(VI)~\cite{monahan82}.  Initializing $\alpha^0_1 = \vec{0}$, 
$\Gamma^0 = \{ \alpha^0_1 \}$, and assuming discrete 
observations $o \in \mathcal{O}^h$,
$\Gamma^h$ is obtained from
$\Gamma^{h-1}$ as follows:\footnote{The
  $\textrm{\large $\boxplus$}$ of sets is defined as $\textrm{\large
    $\boxplus$}_{j \in \{ 1,\ldots, n \} } S_j = S_1 \textrm{\large
    $\boxplus$} \cdots \textrm{\large $\boxplus$} S_n$ where the
  pairwise cross-sum $P \textrm{\large $\boxplus$} Q = \{ \vec{p} +
  \vec{q} | \vec{p} \in P, \vec{q} \in Q \}$.}  
{\footnotesize
\vspace{-1mm}
\begin{align} 
g^h_{a,o,j}(\xds) &=  \int_{\vec{x}_{s'}} \sum_{\vec{d_{s'}}} p(o|\xdsp,a)p(\xdsp|\xds,a) \alpha^{h-1}_j(\xdsp) d\vec{x}_{s'}; \hspace{2mm}  \forall \alpha^{h-1}_{j} \in \Gamma^{h-1} \label{eq:backup} \\
\Gamma^{h}_a   &= R(\xds,a) + \gamma \textrm{\large $\boxplus$}_{o \in \mathcal{O}} \left\{ g^h_{a,o,j}(\xds) \right\}_j  \label{eq:cross_prod}\\ 
\Gamma^h  &= \bigcup_a \Gamma^h_a 
\end{align}
\vspace{-4mm}
}
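The structure of this backup and cross-sum can be sketched for a fully discrete POMDP with a single action; all probabilities and rewards below are illustrative, not drawn from the paper.

```python
import itertools

# Sketch of Monahan's backup and cross-sum equations above for a fully
# discrete POMDP; the two-state/two-observation numbers are illustrative.
S, O, gamma = [0, 1], [0, 1], 0.9
T = {"a1": [[0.9, 0.1], [0.2, 0.8]]}   # T[a][s][s']
Z = {"a1": [[0.8, 0.2], [0.3, 0.7]]}   # Z[a][s'][o]
R = {"a1": [1.0, 0.0]}                 # R[a][s]

def backup(Gamma_prev):
    Gamma = []
    for a in T:
        # g[o][j][s] = sum_{s'} Z[a][s'][o] * T[a][s][s'] * alpha_j[s']
        g = {o: [[sum(Z[a][sp][o] * T[a][s][sp] * alpha[sp] for sp in S)
                  for s in S]
                 for alpha in Gamma_prev]
             for o in O}
        # cross-sum: choose one g-vector per observation, add the reward
        for choice in itertools.product(*(g[o] for o in O)):
            Gamma.append([R[a][s] + gamma * sum(v[s] for v in choice)
                          for s in S])
    return Gamma

Gamma1 = backup([[0.0, 0.0]])   # horizon 1: just [R], since alpha^0 = 0
```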

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\incmargin{.5em}
\linesnumbered
\begin{algorithm}[t!]
\footnotesize
\vspace{-.5mm}
\dontprintsemicolon
\SetKwFunction{backup}{Backup}
\SetKwFunction{genObs}{GenRelObs}
\SetKwFunction{prune}{Prune}
\SetKwFunction{remapWithPrimes}{Prime}
\Begin
{
   $V^0:=0, h:=0, \Gamma_{PBVI}^0 = \{ \alpha_1^0 \} $\;
   \While{$h < H$}
   {
       $h:=h+1, \Gamma^h :=\emptyset, \Gamma_{PBVI}^h :=\emptyset$\;
       \ForEach {$\vec{b}_i \in B$}
       {
       	\ForEach {$a \in A$}
      	 {
			$\Gamma_{a}^h :=\emptyset$ \;       		
       		\eIf {(continuous observations: $q > 0$)}
       	         {\emph{// Derive relevant observation partitions $\mathcal{O}_i^h$ for belief $\vec{b}_i$} \;
			  $\l \mathcal{O}_i^h,p(\mathcal{O}_i^h|\xdsp,a) \r \,:=\,$ \genObs{$\Gamma_{PBVI}^{h-1},a,\vec{b}_i$}\;}
                 {\emph{// Discrete observations and model already known}\;
                          $\mathcal{O}_i^h := \{ \vec{d}_o \}$; $p(\mathcal{O}_i^h|\xdsp,a) := $ see Eq~\eqref{eq:obs_model}\;}
       		 \ForEach {$o \in \mathcal{O}_i^{h}$}
       		 {
				\ForEach {$\alpha_j^{h-1} \in \Gamma_{PBVI}^{h-1}$}
       			{
   	 		  		$\alpha_j^{h-1} :=\,$ \remapWithPrimes{$\alpha_j^{h-1}$} 
   	 		  		\emph{// $\forall d_i$: $d_i \to d_i'$ and $\forall x_i$: $x_i \to x_i'$} \; 
%   	 		    	$\Gamma_{a,xd_{o_i},j}^h \,:=\, p(o|xd_{s}) \cdot$ \backup{$\alpha_j^{h},a$}\;
   	 		    	{$g_{a,o,j}^h \,:=\, $ see Eq~\eqref{eq:backup}}
       	      	}
%       	      	$\Gamma_{a,\xdo}^h\,:=\,\textrm{\large $\boxplus$} \Gamma_{a,xd_{o_i}}^h$\;
%       	      	$\Gamma_{a,\xdo}^h\,:=\, $ see Eq~\eqref{eq:cross_prod}\;
       	     }
%           $\Gamma_a^{h} \,:=\,R_a \oplus \gamma \cdot \Gamma_{a,\xdo}^h$\;
            $\Gamma_a^{h} \,:=\, $ see Eq~\eqref{eq:cross_prod}\;
            $\Gamma^{h} \,:=\, \Gamma^{h} \cup \Gamma_a^{h}$\;
       	 }
        	%monahan's pruning first generates all vectors, then prunes
                %but not for PBVI since we only retain alpha-functions provably dominant at a belief point
%              $\Gamma^h \,:=\, $\prune{$\Gamma^h$} \emph{// optional strict dominance testing of $\alpha$-functions}\; 
      }
             % $V^h \,:=\, \mathrm{max}_{\alpha_j \in \Gamma^h} \vec{b}_i \cdot \alpha_j$\;
             % $\pi^{*,h} \,:=\, \argmax_{a} \, \Gamma_a^{h}$\;
      \emph{// Retain only $\alpha$-functions optimal at each belief point}\;
      \ForEach {$\vec{b}_i \in B$}
      {
     	$\alpha_{\vec{b}_i}^h :=\ \argmax_{\alpha_j \in \Gamma^h} \alpha_j \cdot \vec{b}_i$ $\,$\;
     	$\Gamma_{PBVI}^h :=\ \Gamma_{PBVI}^h \cup \alpha_{\vec{b}_i}^h$\;
      }

        \emph{// Terminate if early convergence}\;
       \If{$\Gamma_{PBVI}^h = \Gamma_{PBVI}^{h-1}$}
           {break $\,$\;}
   }
     \Return{$\Gamma_{PBVI}$} \;
}
\caption{\footnotesize \texttt{PBVI}(H-POMDP, $H$, $B=\left\{\vec{b}_i \right\}$) $\longrightarrow$ $\l V^h \r$ \label{alg:vi}}
\vspace{-1mm}
\end{algorithm}
\decmargin{.5em}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\textbf{Point-based value iteration (PBVI)}~\cite{pbvi_jair06,Perseus} 
computes the value
function only for a set of belief states $\{ \vec{b}_i \}$ where
$\vec{b}_i := p(\xds)$.  The idea is straightforward: the main
modification to Monahan's VI approach in Algorithm~\ref{alg:vi}
is the loop in lines 23--25, where
only $\alpha$-functions optimal at some belief state are retained for
subsequent iterations.
In the case of continuous observation
variables ($q > 0$), we will need to derive a relevant set of
observations on line 10, a key contribution of this work as described
in Section~\ref{sec:cont_obs}.  Otherwise, if the observations are only
discrete ($q=0$), then a finite set of observations is already known and
the observation function is given by Eq~\eqref{eq:obs_model}.

We remark that Algorithm~\ref{alg:vi} is a generic framework that 
can be used for both PBVI and other variants of 
approximate VI.  If used for PBVI, an 
efficient direct backup computation of the optimal $\alpha$-function for belief 
state $\vec{b}_i$ should be used in line 17 
that is linear in the number of observations~\cite{pbvi_jair06,Perseus}
and which obviates the need for lines 23--25.  
However, for an alternate version of approximate value iteration that will often
produce more accurate values for belief states other than those in $B$, 
one may instead retain the full cross-sum backup
of line 17 but omit lines 23--25 --- 
this yields an approximate VI approach 
(using discretized observations 
relevant only to a chosen set of belief states $B$ if continuous observations
are present) that is not restricted to $\alpha$-functions optimal only at $B$, 
hence allowing greater flexibility
in approximating the value function over all belief states.

Whereas PBVI is optimal if all reachable belief states within horizon
$H$ are enumerated in $B$, in the H-POMDP setting, the generation of
continuous observations will most often lead to an infinite number of
reachable belief states, even with finite horizon --- this makes
it quite difficult to provide optimality guarantees in the general case of PBVI
for continuous observation settings.  Nonetheless, PBVI has been quite
successful in practice without exhaustive enumeration of all reachable
beliefs~\cite{pbvi_jair06,hsvi2,Perseus,gapmin}, which motivates our
use of PBVI in this work.
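The PBVI retention step (lines 23--25 of Algorithm~\ref{alg:vi}) amounts to an $\argmax$ per belief point; a toy sketch with explicit vectors:

```python
# Toy sketch of the PBVI retention loop: for each belief point, keep
# only the alpha-vector with the maximal inner product at that belief.
Gamma = [[0.0, 10.0], [10.0, 0.0], [4.0, 4.0]]
B = [[0.9, 0.1], [0.1, 0.9]]

def dot(alpha, b):
    return sum(a * p for a, p in zip(alpha, b))

Gamma_pbvi = []
for b in B:
    best = max(Gamma, key=lambda alpha: dot(alpha, b))
    if best not in Gamma_pbvi:   # union, so duplicates collapse
        Gamma_pbvi.append(best)
```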

\section{Symbolic Dynamic Programming} 

In this section we take a symbolic dynamic programming (SDP) approach
to implementing VI and PBVI as defined in the last section.  To do this,
we need only show that all required operations can be computed efficiently
and in closed-form, which we do next, building on SDP for 
MDPs~\cite{sanner_uai11}.  

\subsection{Case Representation and Extended ADDs}
\label{sec:case}

% operations, max, restrict, substitute
%overview + example plant
The previous \textsc{\bf Power Plant} examples represented all functions in case form,
generally defined as {\footnotesize
\vspace{-1mm}
\begin{align}
f = 
\begin{cases}
  \phi_1: & f_1 \\ 
 \vdots&\vdots\\ 
  \phi_k: & f_k \\ 
\end{cases} \nonumber
\end{align}
\vspace{-4mm}
}

and this is the form we use to represent all functions in an H-POMDP.
The $\phi_i$ are disjoint logical formulae defined over $\xds$ and/or $\xdo$ with logical ($\land,\lor,\neg$) combinations of boolean variables and inequalities ($\geq,>,\leq,<$) over continuous variables.  
For discrete observation H-POMDPs, the $f_i$ and inequalities may use any function (e.g., $\sin(x_1) > \log(x_2)\cdot x_3$); for continuous observations, they are restricted to linear inequalities and linear or piecewise constant $f_i$ as described in Section~\ref{sec:model}.

\emph{Unary operations} such as scalar multiplication $c\cdot f$ (for some constant $c \in \mathbb{R}$) or negation $-f$ on case statements are applied to each case partition $f_i$ ($1 \leq i \leq k$). 
A \emph{binary operation} on two case statements takes the cross-product of the logical partitions of each case statement and performs the corresponding operation on the resulting paired partitions.  The cross-sum $\oplus$ of two cases is defined as follows:
{\footnotesize 
\vspace{-4mm}
\begin{center}
\begin{tabular}{r c c c l}
&
\hspace{-6mm} 
  $\begin{cases}
    \phi_1: & f_1 \\ 
    \phi_2: & f_2 \\ 
  \end{cases}$
$\oplus$
&
\hspace{-4mm}
  $\begin{cases}
    \psi_1: & g_1 \\ 
    \psi_2: & g_2 \\ 
  \end{cases}$
&
\hspace{-2mm} 
$ = $
&
\hspace{-2mm}
  $\begin{cases}
  \phi_1 \wedge \psi_1: & f_1 + g_1 \\ 
  \phi_1 \wedge \psi_2: & f_1 + g_2 \\ 
  \phi_2 \wedge \psi_1: & f_2 + g_1 \\ 
  \phi_2 \wedge \psi_2: & f_2 + g_2 \\ 
  \end{cases}$
\end{tabular}
\end{center}
\vspace{-2mm}
}
Likewise $\ominus$ and $\otimes$ are defined by subtracting or multiplying partition values.  Partitions with inconsistent (unsatisfiable) constraints can simply be discarded, since they never contribute to the function value.
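A minimal list-based sketch of this binary operation (in contrast to the XADD representation used in our implementation): a case statement is a list of (condition, value) pairs with conditions as predicates, and the cross-sum enumerates pairwise conjunctions exactly as above.

```python
import itertools

# A case statement as a list of (condition, value) pairs; conditions
# are predicates over a scalar state x (an illustrative stand-in for
# the symbolic representation).
f = [(lambda x: x > 15, -1000.0), (lambda x: x <= 15, 100.0)]
g = [(lambda x: x > 0, 1.0), (lambda x: x <= 0, 2.0)]

def cross_sum(f, g):
    # one partition per pairwise conjunction, values added
    return [(lambda x, p=p, q=q: p(x) and q(x), fv + gv)
            for (p, fv), (q, gv) in itertools.product(f, g)]

def evaluate(case, x):
    for cond, val in case:
        if cond(x):
            return val
    raise ValueError("no partition covers x")

h = cross_sum(f, g)
```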
A \emph{symbolic case maximization} is defined as below:
\vspace{-4mm}
{\footnotesize
\vspace{-2mm}
\begin{center}
\begin{tabular}{r c c c l}
&
\hspace{-7mm} $\casemax \Bigg(
  \begin{cases}
    \phi_1: \hspace{-2mm} & \hspace{-2mm} f_1 \\ 
    \phi_2: \hspace{-2mm} & \hspace{-2mm} f_2 \\ 
  \end{cases}$
$,$
&
\hspace{-4mm}
  $\begin{cases}
    \psi_1: \hspace{-2mm} & \hspace{-2mm} g_1 \\ 
    \psi_2: \hspace{-2mm} & \hspace{-2mm} g_2 \\ 
  \end{cases} \Bigg)$
&
\hspace{-4mm} 
$ = $
&
\hspace{-4mm}
  $\begin{cases}
  \phi_1 \wedge \psi_1 \wedge f_1 > g_1    : & \hspace{-2mm} f_1 \\ 
  \phi_1 \wedge \psi_1 \wedge f_1 \leq g_1 : & \hspace{-2mm} g_1 \\ 
  \phi_1 \wedge \psi_2 \wedge f_1 > g_2    : & \hspace{-2mm}f_1 \\ 
  \phi_1 \wedge \psi_2 \wedge f_1 \leq g_2 : & \hspace{-2mm} g_2 \\ 
  \vdots & \vdots
%  \phi_2 \wedge \psi_1 \wedge f_2 > g_1    : & \hspace{-2mm} f_2 \\ 
%  \phi_2 \wedge \psi_1 \wedge f_2 \leq g_1 : & \hspace{-2mm} g_1 \\ 
%  \phi_2 \wedge \psi_2 \wedge f_2 > g_2    : & \hspace{-2mm} f_2 \\ 
%  \phi_2 \wedge \psi_2 \wedge f_2 \leq g_2 : & \hspace{-2mm} g_2 \\ 
  \end{cases}$
\end{tabular}
\end{center}
\vspace{-3mm}
}
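A corresponding sketch of $\casemax$ in the same list-based style, where values must now be symbolic (here, callables of $x$) so that the comparisons $f_i > g_j$ genuinely split partitions:

```python
import itertools

# Sketch of the symbolic casemax: every pair of partitions splits
# further on which value is larger (illustrative stand-in for the
# paper's symbolic case/XADD representation).
f = [(lambda x: x > 0, lambda x: 2.0 * x)]    # phi_1 : f_1
g = [(lambda x: True, lambda x: x + 3.0)]     # psi_1 : g_1

def casemax(f, g):
    out = []
    for (p, fv), (q, gv) in itertools.product(f, g):
        out.append((lambda x, p=p, q=q, fv=fv, gv=gv:
                        p(x) and q(x) and fv(x) > gv(x), fv))
        out.append((lambda x, p=p, q=q, fv=fv, gv=gv:
                        p(x) and q(x) and fv(x) <= gv(x), gv))
    return out

def evaluate(case, x):
    for cond, val in case:
        if cond(x):
            return val(x)
    raise ValueError("no partition covers x")

m = casemax(f, g)
```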

The following SDP operations on case statements require more detail than can be provided here, hence we refer the reader to the relevant literature:
\begin{itemize}
%\item 
%%{\it Restriction $f|_{\phi}$:}  Takes a function $f$ to restrict only in cases
%%that satisfy some formula $\phi$ as defined in \cite{sanner_uai11}.
\item 
{\it Substitution $f\sigma$:} Takes a set $\sigma$ of variables and their substitutions (which may be case statements themselves), and carries out all variable substitutions~\cite{sanner_uai11}.
\item 
{\it Integration $\int_{x_1} f \; dx_1$:}  There are two forms: if $x_1$ is involved in a $\delta$-function ({\it cf.}\ the transition in Eq~\eqref{eq:trans}) then the integral is equivalent to a symbolic substitution and can be applied to \emph{any} case statement ({\it cf.}~\cite{sanner_uai11}). Otherwise, if $f$ is in linearly constrained polynomial case form, then the approach of~\cite{sanner_aaai12} can be applied to yield a result in the same form.
\end{itemize}
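For example, applying the first ($\delta$-function) form to the $a\!=\!\open$ transition of Eq~\eqref{eq:trans}, the integral over $t_s'$ reduces to the substitution $\sigma = \{ t_s' / (t_s - 5) \}$:
{\footnotesize
\begin{align}
\int_{t_s'} \delta\left[ t_s' - (t_s - 5) \right] \cdot \alpha^{h-1}_j(t_s') \; dt_s' \;=\; \alpha^{h-1}_j(t_s')\,\sigma \;=\; \alpha^{h-1}_j(t_s - 5), \nonumber
\end{align}
}
which remains in closed case form whenever $\alpha^{h-1}_j$ does.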


%xadd representation
Case operations yield a combinatorial explosion in size if
na\"{i}vely implemented, hence 
we use the data structure of the \emph{extended algebraic
  decision diagram} (XADD)~\cite{sanner_uai11} as shown in
Figure~\ref{fig:cond_plan} (right) to \emph{compactly} 
represent case statements and \emph{efficiently} support the above
case operations with them.

\subsection{VI for Hybrid State and Discrete Observations} 
\label{sec:disc_obs}

For H-POMDPs with only discrete observations $o
\in \mathcal{O}$ and observation function $p(o|\xdsp,a)$ 
as in the form of Eq~\eqref{eq:ex_disc_obs}, we introduce a symbolic version of
Monahan's VI algorithm.  In brief, we note that all VI operations
needed in Section~\ref{sec:vi} apply \emph{directly} to 
H-POMDPs, e.g., rewriting 
Eq~\eqref{eq:backup}: {\footnotesize
\vspace{-1mm}
\begin{equation}
g^h_{a,o,j}(\xds) \! =  \!\! \int_{\vec{x}_{s'}} \!\! \bigoplus_{\vec{d_{s'}}} \! \left[ p(o|\xdsp,\!a) \! \otimes \!\! \left( \! \bigotimes_{i=1}^n p(d_{s_i}'\!|\xds,\!a) \!\! \right) \!\! \otimes \!\! \left( \! \bigotimes_{j=1}^m p(x_{s_j}'\!|\xds, \vec{d}_s',\!a) \!\! \right) \!\! \otimes \! \alpha^{h\!-\!1}_j(\xdsp) \! \right] \!\! d\vec{x}_{s'} \label{eq:backup_sdp}
\end{equation}
\vspace{-4mm}
}

Crucially, we note that since the continuous transition CPFs $p(x_{s_j}' \! |\xds, \! \vec{d}_s',\!a)$ are deterministic and hence defined with Dirac $\delta$'s (e.g., Eq~\eqref{eq:trans}) as described in Section~\ref{sec:model}, the integral $\int_{\vec{x}_{s'}}\!$ can always be computed in closed case form as discussed in Section~\ref{sec:case}.
In short, nothing additional is required for PBVI on H-POMDPs
in this case --- the key insight
is simply that $\alpha$-functions are now represented by
case statements and can ``grow'' with the horizon as they partition the
state space more and more finely.
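To make the closed-form integration concrete, the following minimal Python sketch (our own illustrative stand-in for the XADD case operations, not the paper's implementation) shows how integrating a Dirac-$\delta$ transition against a case-statement $\alpha$-function reduces to substituting the transition expression into every partition:

```python
# A case statement is an ordered list of (test, value_fn) partitions;
# unmatched states fall through to the implicit 0 partition.

def evaluate(case, x):
    for test, val in case:
        if test(x):
            return val(x)
    return 0.0

# Toy alpha over the next state x': 10 if x' < 15, else -100.
alpha = [(lambda xp: xp < 15.0, lambda xp: 10.0),
         (lambda xp: xp >= 15.0, lambda xp: -100.0)]

# A deterministic transition x' = x + 10 encoded by delta(x' - (x + 10)):
# integrating delta * alpha over x' is exactly substitution of x + 10 for x'.
def dirac_backup(case, transition):
    return [(lambda x, t=test: t(transition(x)),
             lambda x, v=val: v(transition(x))) for test, val in case]

backed_up = dirac_backup(alpha, lambda x: x + 10.0)
print(evaluate(backed_up, 0.0))   # 10.0  (x=0  -> x'=10 < 15)
print(evaluate(backed_up, 10.0))  # -100.0 (x=10 -> x'=20 >= 15)
```

Note how the partitions of the backed-up function are now constraints over the \emph{current} state, mirroring how symbolic backups refine the state-space partitioning with the horizon.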
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\incmargin{.5em}
\linesnumbered
\begin{algorithm}[t!]
\footnotesize 
\vspace{-.5mm}
\dontprintsemicolon
\SetKwFunction{substitute}{Substitute}

\Begin
{
		\ForEach {$\alpha_j(\xdsp) \in \Gamma^{h-1}$ and $a \in A$}    
		{
		\emph{// Perform exact 1-step DP backup of $\alpha$-functions at horizon $h-1$}\\
    	$\alpha^a_j(\xds,\xdo) := \int_{\vec{x}_s'} \bigoplus_{\vec{d}_s'} p(\xdo|\xdsp,a) \otimes p(\xdsp| \xds,a) \otimes \alpha_j(\xdsp) \; d\vec{x}_s'$\;
		}  
		\ForEach {$\alpha^a_j(\xds,\xdo)$}    
		{
		\emph{// Generate value of each $\alpha$-vector at belief point $\vec{b}_i(\xds)$ as a function of observations}\\
$\delta^a_{j}(\xdo) := \int_{\vec{x}_{s}} \bigoplus_{\vec{d}_s} \vec{b}_i(\xds) \otimes \alpha^a_j(\xds,\xdo) \; d\vec{x}_s$\;
		}
		\emph{// Using $\casemax$, generate observation partitions relevant to each policy -- see text for details}\\
		$\mathcal{O}^h := \mathrm{extract\text{-}partition\text{-}constraints}[ \casemax(\delta^{a_1}_1(\xdo),\delta^{a_2}_1(\xdo),\ldots,\delta^{a_r}_{j}(\xdo))]$\;

	\ForEach {$o_k \in \mathcal{O}^h$}{
    	  \emph{// Let $\phi_{o_k}$ be the partition constraints for observation $o_k \in \mathcal{O}^h$}\\
            $p(\mathcal{O}^h = o_k|\xdsp,a) := \int_{\vec{x}_o} \bigoplus_{\vec{d}_o} p(\xdo|\xdsp,a) \otimes \mathbb{I}[\phi_{o_k}] \; d\vec{x}_o$ \;
%            $p(\mathcal{O}^h = o_k|\xdsp) := \int_{\vec{x}_s'} \int_{\vec{x}_o} \int_{\vec{x}_s} \bigoplus_{\vec{d}_o} \bigoplus_{\vec{d}_s} \bigoplus_{\vec{d}_s'} p(\xdo|\xdsp,a) \otimes p(\xdsp|\xds,a) \otimes \vec{b}_i \otimes  \mathbb{I}[\phi_{o_k}] d_{x_o} d_{x_s}d_{x_s'}$ \;
        }
    \Return{$\l \mathcal{O}^h, p(\mathcal{O}^h|\xdsp,a) \r$} \;
    %do this for each belief in B
}
\caption{\footnotesize \texttt{GenRelObs}($\Gamma^{h-1},a,\vec{b}_i$) $\longrightarrow$ $\l \mathcal{O}^h, p(\mathcal{O}^h|\xdsp,a) \r$ }
\label{alg:genrelobs}
\vspace{-1mm}
\end{algorithm}
\decmargin{.5em}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PBVI for Hybrid State and Hybrid Observations} 
\label{sec:cont_obs}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[tbp!]
\vspace{-3mm}
\centering
\hspace{-20mm}
\includegraphics[width=0.42\textwidth]{pics/beliefs_2.pdf}
\hspace{10mm}
%\includegraphics[width=0.33\textwidth]{pics/delta_b1.pdf}
%\hspace{-2mm}
\includegraphics[width=0.42\textwidth]{pics/delta_b2_3.pdf}
\hspace{-17mm}
\vspace{-2mm}
\caption{\footnotesize 
{\it (left)} Beliefs $b_1,b_2$ for \textsc{\bf Cont. 1D-Power Plant}; 
%{\it (center)} Observation dependent function for $b_2$ that partition the observation space into 5 regions with different probabilities for $p(o_1),p(o_2)$ ; 
{\it (right)} derived observation partitions for $b_2$ at $h=2$.
%, and optimal conditional policies. This diagrams shows that for temperatures $t_o < 5.1$ the policy is to open the valve and for higher temperatures, it is unsafe to open it and the policy should follow $\delta_{close}(t_o)$ which is to close the valve. 
}
\label{fig:beliefs}
%\vspace{-4mm}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 

In general, it would be impossible to apply standard VI to 
H-POMDPs with continuous observations since the number of observations
is infinite.  However, building on ideas in~\cite{pascal_ijcai05},
in the case of PBVI, it is possible to \emph{derive} a finite set of
continuous observation partitions that permit exact point-based backups
\emph{at a belief point}.
This additional operation (\texttt{GenRelObs}) appears on line 10 of PBVI in 
Algorithm~\ref{alg:vi} in the case of continuous observations and is
formally defined in Algorithm~\ref{alg:genrelobs}.

To demonstrate the generation of relevant continuous observation partitions, 
we use the second iteration of the \textsc{\bf Cont. Obs. 1D-Power
  Plant} along with two belief points represented as 
uniform distributions: $b_1: U(t_s;2,6)$ and $b_2: U(t_s;6,11)$ as
shown in Figure~\ref{fig:beliefs} (left).
For $h=2$, we assume, purely for expository purposes, that 
$|\Gamma^{1}| = 1$ (i.e., it contains only one $\alpha$-function) and
that in lines 2--4 of Algorithm~\ref{alg:genrelobs} we have computed the 
following two $\alpha$-functions for $a \in \{ \open, \close \}$:
{\footnotesize
\vspace{-2mm}
\begin{align}
\alpha_1^{\close}(t_s,t_o) &= 
\begin{cases}
 (t_s<15)\wedge (t_s \! - \! 10 < t_o<t_s) &\!\!\!: 10 \\
(t_s\geq15)\wedge (t_s \! - \! 10 < t_o<t_s) &\!\!\!: -100  \\
\neg(t_s \! - \! 10 < t_o<t_s) &\!\!\! : 0
\end{cases}
\;\;
\alpha_1^{\open}(t_s,t_o) = \begin{cases}
(t_s \! - \! 10 < t_o<t_s) &\!\!\!: 0.1 \\
\neg(t_s \! - \! 10 < t_o<t_s) &\!\!\!: 0
\end{cases}
\nonumber
\end{align}
\vspace{-3mm}
} 

We now need the $\alpha$-functions expressed over the observation
space for a particular belief state, so we next marginalize out $\xds$
in lines 5--7. The resulting $\delta$-functions are shown below;
for brevity, from this point forward we suppress 0 partitions
in the case statements:
{\footnotesize
\vspace{-1mm}
\begin{align}
\delta^{\close}_1(t_o) &= 
\begin{cases}
 (14 < t_o< 18) &: 0.025 t_o - 0.45\\
 (8 < t_o< 14) &:  - 0.1\\
 (4 < t_o< 8) &: - 0.025 t_o -0.1\\
\end{cases}
\hspace{5mm} 
\delta^{\open}_1(t_o) = \begin{cases}
 (15 < t_o< 18) &: 25 t_o - 450\\
 (14 < t_o< 15) &: - 2.5 t_o - 37.5\\
 (8 < t_o< 14) &:  -72.5\\
 (5 < t_o< 8) &: - 25 t_o + 127.5\\
 (4 < t_o< 5) &:  2.5 t_o - 10\\
\end{cases}
\nonumber
\end{align}
\vspace{-4mm}
}
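The marginalization in lines 5--7 is simply an expectation of an $\alpha$-function under the belief; the following numeric sketch (with a toy belief and $\alpha$ of our own, not the plant's, standing in for the symbolic integral) illustrates the operation:

```python
# delta(t_o) = integral over t_s of b(t_s) * alpha(t_s, t_o) dt_s,
# approximated here by a midpoint rule; the paper computes this in
# closed form symbolically.

def belief(t_s):                      # toy uniform belief U(t_s; 0, 10)
    return 0.1 if 0.0 <= t_s <= 10.0 else 0.0

def alpha(t_s, t_o):                  # toy alpha: value 10 when t_s < t_o
    return 10.0 if t_s < t_o else 0.0

def delta(t_o, n=1000):
    lo, hi = 0.0, 10.0                # support of the belief
    h = (hi - lo) / n
    return sum(belief(lo + (i + 0.5) * h) * alpha(lo + (i + 0.5) * h, t_o)
               for i in range(n)) * h

print(delta(5.0))  # ~ 0.1 * 10 * 5 = 5.0
```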

Both $\delta^{\close}_1(t_o)$ and $\delta^{\open}_1(t_o)$ are drawn
graphically in Figure~\ref{fig:beliefs} (right).  These observation-dependent 
$\delta$'s divide the observation space into regions, each of which
determines the optimal policy with respect to belief state
$b_2$.  Following \cite{pascal_ijcai05}, we need to find the optimal
boundaries, or partitions, of the observation space; in that work, numerical
solutions are proposed to find these boundaries in \emph{one dimension}
(multiple observations are handled through an independence assumption).
Here, instead, we leverage the symbolic power of
the $\casemax$ operator defined in Section~\ref{sec:case} to find all
partitions of the \emph{potentially correlated, multivariate} observation
space where each $\delta$ is optimal. For the two $\delta$'s 
above, the following partitions of the observation space are derived
by the $\casemax$ operator in line 9:
{\footnotesize
\vspace{-1mm}
\begin{align}
\casemax \left( \delta^{\close}_1(t_o),\delta^{\open}_1(t_o) \right) &= 
\begin{cases}
o_1: (14 < t_o \leq 18) &: 0.025 t_o - 0.45\\
o_1: (8 < t_o \leq 14) &:  -0.1\\
o_1: (5.1 < t_o \leq  8) &: - 0.025 t_o -0.1\\
o_2: (5 < t_o \leq 5.1) &: - 25 t_o + 127.5\\
o_2: (4 < t_o \leq  5) &:  2.5 t_o - 10\\
\end{cases}
\nonumber
\end{align}
\vspace{-4mm}
}

Here we have labeled with $o_1$ the observations where 
$\delta^\close_1$ is maximal and with $o_2$ the observations where
$\delta^\open_1$ is maximal.
What we really care about, though, are just the constraints 
identifying $o_1$ and $o_2$; extracting these is the
task of $\mathrm{extract\text{-}partition\text{-}constraints}$ in line 9.
It associates with $o_1$ the partition constraint
$\phi_{o_1} \equiv (5.1 < t_o \leq 8) \lor (8 < t_o \leq 14) \lor (14 < t_o \leq 18)$
and with $o_2$ the 
partition constraint $\phi_{o_2} \equiv (4 < t_o \leq 5) \lor (5 < t_o \leq 5.1)$
--- taking into account the 0 partitions and the 1D nature of this example, 
we can further simplify
$\phi_{o_1} \equiv (t_o > 5.1)$ and $\phi_{o_2} \equiv (t_o \leq 5.1)$.
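The boundary $t_o = 5.1$ is a crossover point between the two $\delta$'s. As a purely numeric illustration of what $\casemax$ computes symbolically, the following sketch (with toy linear functions of our own, not the plant's $\delta$'s) scans for the points where the argmax of two functions switches:

```python
# Numeric stand-in for casemax over two functions of one observation
# variable: scan a fine grid, track which function is maximal, and
# report the grid points where the argmax switches.

def casemax_boundaries(f, g, lo, hi, n=100000):
    h = (hi - lo) / n
    boundaries = []
    prev = f(lo) >= g(lo)
    for i in range(1, n + 1):
        t = lo + i * h
        cur = f(t) >= g(t)
        if cur != prev:
            boundaries.append(t)   # argmax switched within (t - h, t]
            prev = cur
    return boundaries

# toy delta's: g is maximal for t < 5, f for t > 5
f = lambda t: t
g = lambda t: 10.0 - t
bs = casemax_boundaries(f, g, 0.0, 10.0)
print(bs)  # a single boundary near t = 5
```

Unlike this grid scan, the symbolic $\casemax$ yields the boundary constraints exactly and extends directly to multivariate, correlated observation spaces.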

Given these relevant observation partitions, our final task in lines
10--12 is to compute the probabilities of each observation partition
$\phi_{o_k}$.  This is simply done by marginalizing over the
observation function $p(\mathcal{O}^h|\xdsp,a)$ within each region
defined by $\phi_{o_k}$ (achieved by multiplying by an indicator
function $\mathbb{I}[\phi_{o_k}]$ over these constraints).  To better understand 
what is computed here, we can compute
the probability $p(o_k|\vec{b}_i,a)$ of each observation for a 
particular belief, calculated as follows:
{\footnotesize
\vspace{-1mm}
\begin{equation}
p(o_k|\vec{b}_i,a) := \int_{\vec{x}_{s}} \int_{\vec{x}_s'} \bigoplus_{\vec{d}_s} \bigoplus_{\vec{d}_s'} p(o_k|\xdsp,a) \otimes p(\xdsp| \xds,a) \otimes \alpha_j(\xdsp) \otimes \vec{b}_i(\xds)\; d\vec{x}_s' d\vec{x}_s
\end{equation}
\vspace{-4mm}
}

Specifically, for $\vec{b}_2$, we obtain $p(o_1|\vec{b}_2,a=\close) =
0.0127$ and $p(o_2|\vec{b}_2,a=\close) = 0.933$ as shown in
Figure~\ref{fig:beliefs} (right).
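Computing such a probability amounts to integrating the observation density against the indicator $\mathbb{I}[\phi_{o_k}]$. A toy one-dimensional sketch (with a made-up uniform sensor model and partition constraint, not the plant's) is:

```python
# p(o_k | t_s) = integral over t_o of p(t_o | t_s) * I[phi_{o_k}(t_o)] dt_o,
# here for an illustrative uniform sensor p(t_o|t_s) = 0.1 on
# (t_s, t_s + 10) and the partition constraint phi: t_o <= 5.1.

def obs_density(t_o, t_s):
    return 0.1 if t_s < t_o < t_s + 10.0 else 0.0

def partition_prob(t_s, phi, lo=-50.0, hi=50.0, n=100000):
    h = (hi - lo) / n                 # midpoint rule over the t_o axis
    return sum(obs_density(lo + (i + 0.5) * h, t_s)
               for i in range(n) if phi(lo + (i + 0.5) * h)) * h

p_o2 = partition_prob(3.0, lambda t_o: t_o <= 5.1)
print(p_o2)  # ~ 0.1 * (5.1 - 3.0) = 0.21
```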

In summary, we have shown in this section how the exact
dynamic programming algorithm for the continuous state, discrete
observation POMDP setting of Section~\ref{sec:disc_obs} extends to compute
exact 1-step point-based backups in the continuous observation setting.
This was accomplished through the crucial insight that, despite the
infinite number of observations, Algorithm~\ref{alg:genrelobs}
allows us to symbolically 
derive a set of relevant observations for each belief point that
distinguish the optimal policy (and hence value), as graphically
illustrated in Figure~\ref{fig:beliefs} (right).  Next we present
empirical results for 1- and 2-dimensional continuous state and 
observation spaces.


\section{Empirical Results}
We evaluated our continuous POMDP solution using XADDs on the
\textsc{\bf 1D-Power Plant} example and another variant of this
problem with two variables, described below.\footnote{Full problem
specifications and Java code to reproduce these experiments are available
on Google Code: \texttt{http://code.google.com/p/cpomdp}.}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[tbp!]
\vspace{2mm}
\centering
\includegraphics[width=0.35\textwidth]{pics/time3.pdf} \hspace{15mm}
\includegraphics[width=0.35\textwidth]{pics/nodes3.pdf}
%\includegraphics[width=0.32\textwidth]{pics/alpha-vectors2.pdf}
\vspace{-1mm}
\caption{\footnotesize 
{\it (left)} time vs. horizon, and 
{\it (right)} space (total \# XADD nodes in $\alpha$-functions) vs. horizon.
%{\it (right)} number of $\alpha$-functions that would be generated before
%pruning for each horizon, belief state, and problem if the full 
%exponential cross-sum backup were used.
%{\it (right)} Number of $\alpha$-vectors vs Horizon.
%{\bf TODO:} *** \# of alpha-vectors that would be computed in an
%approximate value iteration approach.  COVER in text as well.
}
\label{fig:timeSpace}
%\vspace{-4mm}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 

\textsc{\bf 2D-Power Plant}: We consider a more complex power plant model,
similar to \cite{steam2}, where the pressure inside
the water tank must be controlled to avoid mixing water into the steam
(which would lead to an explosion of the tank).  We model an observable pressure
reading $p_o$ as a function of the underlying pressure state $p_s$.
Again we have two actions for opening and closing a pressure valve.
The $\close$ action has transition 
{\footnotesize
\begin{align}
p(p_s'|p_s,a=\close)&= \delta\left[ p_s' - 
\begin{cases}
 (p_s+10> 20) &: 20 \\ 
\neg (p_s+10> 20) &: p_s + 10 \\
\end{cases}
\right]\nonumber
\hspace{5mm} 
p(t_s'|t_s,a=\close)= \delta\left[ t_s' - (t_s +10) \right]\nonumber
\end{align}
}
and yields high reward for staying within the 
safe temperature and pressure range:
{\footnotesize
\vspace{-1mm}
\begin{align}
R(t_s,p_s,a=\close) &= 
\begin{cases}
(5 \leq p_s \leq 15)\wedge (95 \leq t_s \leq 105)&:50\\
(5 \leq p_s \leq 15)\wedge (t_s \leq 95)&: -1\\
(p_s \geq 15) &: -5\\ 
else &: -3
\end{cases}\nonumber
\end{align}
\vspace{-4mm}
}
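This piecewise reward is straightforward to evaluate as a guarded case statement; a direct Python transcription (checking the tests in the order the partitions are listed) is:

```python
# R(t_s, p_s, close) transcribed from the case statement above;
# tests are checked top-down, matching the listed partition order.

def reward_close(t_s, p_s):
    if 5.0 <= p_s <= 15.0 and 95.0 <= t_s <= 105.0:
        return 50.0    # safe temperature and pressure range
    if 5.0 <= p_s <= 15.0 and t_s <= 95.0:
        return -1.0    # pressure fine, temperature too low
    if p_s >= 15.0:
        return -5.0    # pressure too high
    return -3.0        # all remaining cases

print(reward_close(100.0, 10.0))  # 50.0: safe operating range
print(reward_close(100.0, 20.0))  # -5.0: pressure too high
```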

Alternately, for the $\open$ action, the transition functions reduce
the temperature by 5 units and the pressure by 10 units as long as the
pressure stays above zero. For the $\open$ reward function, we assume
that there is always a small constant penalty (-1) since no electricity
is produced.
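The bounded deterministic dynamics for both actions can be sketched as follows (our reading of the text; in particular, flooring the pressure at zero for $\open$ is an assumption based on the description above):

```python
# Deterministic 2D-Power Plant dynamics as described in the text:
# close raises temperature by 10 and pressure by 10 (pressure capped at 20);
# open lowers temperature by 5 and pressure by 10 (pressure floored at 0,
# an assumed reading of "as long as the pressure stays above zero").

def step(t_s, p_s, action):
    if action == "close":
        return t_s + 10.0, min(p_s + 10.0, 20.0)
    elif action == "open":
        return t_s - 5.0, max(p_s - 10.0, 0.0)
    raise ValueError(action)

print(step(100.0, 15.0, "close"))  # (110.0, 20.0): pressure hits the cap
print(step(100.0, 5.0, "open"))    # (95.0, 0.0): pressure floored at zero
```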

Observations are distributed uniformly within a region depending on
their underlying state:
{\footnotesize
\begin{align}
p(t_o|t_s') = 
\begin{cases}
(t_s' + 80<t_o<t_s'+ 105) &: 0.04 \\
 \neg (t_s' + 80<t_o<t_s'+ 105) &: 0 \\
\end{cases}\nonumber
\hspace{5mm} 
p(p_o|p_s') = 
\begin{cases}
(p_s'<p_o<p_s'+10) &: 0.1 \\
 \neg(p_s'<p_o<p_s'+10) &: 0 \\
\end{cases}\nonumber
\end{align}
}
Finally, for PBVI, we define two uniform beliefs: 
$\vec{b}_1: U[t_s;90,100]*U[p_s;0,10]$ and $\vec{b}_2: U[t_s;90,130]*U[p_s;10,30]$.

In Figure~\ref{fig:timeSpace}, we present a time and space analysis of
the two versions of \textsc{\bf Power Plant} for horizons up to 
$h=6$. This experimental evaluation relies on one additional
  approximation over the PBVI approach of Algorithm~\ref{alg:vi} 
  in that it substitutes $p(\mathcal{O}^h|\vec{b},a)$
  in place of $p(\mathcal{O}^h|\xdsp,a)$ --- while this yields correct
  observation probabilities for a point-based backup at a particular
  belief state $\vec{b}$, the resulting $\alpha$-functions are
  an approximation for other belief states.  In general, the PBVI
  framework in this paper does \emph{not} require this approximation,
  although using it when appropriate should increase computational
  efficiency.  

Figure~\ref{fig:timeSpace} shows that the computation time required
per iteration generally increases with the horizon, since more complex
$\alpha$-functions lead to a larger number of observation partitions and
thus a more expensive backup operation.  While doubling the number of
state and observation variables requires an order of magnitude more time,
the PBVI approach yields a fairly constant amount of computation
time per horizon step; this indicates that long horizons should be computable
for any problem for which at least one horizon step can be computed in an
acceptable amount of time.


\section{Conclusion} 
We presented the first exact symbolic operations for \texttt{PBVI} in
an expressive subset of H-POMDPs with continuous state \emph{and} observations.
Unlike related work that has extended to the continuous state and
observation setting~\cite{Perseus_cont}, we do not approach the
problem by sampling.  Rather, following~\cite{pascal_ijcai05}, the key
contribution of this work was to define a discrete set of observation
partitions on the multivariate continuous observation space via
symbolic maximization techniques and derive the related probabilities
using symbolic integration.  An important avenue for future work is to
extend these techniques to the case of
continuous state, observation, \emph{and} action H-POMDPs.

{\small

\subsubsection*{Acknowledgments}

NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the ARC through the ICT Centre of Excellence program. This work was supported by the Fraunhofer ATTRACT fellowship STREAM and by the EC, FP7-248258-First-MM.
}

%\subsubsection*{References} 
\bibliography{dcpomdp}
\bibliographystyle{plain}

\end{document}
