\section{Introduction}
\label{sec:intro}

Imagine an Autonomous Underwater Vehicle (AUV, Fig.~\ref{fig:auv-fig})
loaded with science instruments and capable of navigating effectively
at great depths and low altitudes. Such a vehicle makes for an
intrepid ocean explorer. Exploration often implies that a vehicle
operator will not know a priori what the AUV will find. For AUVs able
to adapt missions in situ based on observations, it becomes harder
still to predict a priori what goals are to be accomplished and to
ensure their feasibility. When not all goals are achievable, simply
quitting when the vehicle reaches a time and/or energy limit can yield
very poor performance, since higher-priority goals may have been
deferred to the end of the mission on the incorrect assumption that
time and energy would be plentiful. A greedy approach that simply
tackles the highest-priority outstanding goal next can also perform
poorly, since the resulting path may be far more expensive than an
optimal path that considers all goals together. The core problem we
address is therefore one of selecting a subset of goals to pursue, and
planning a path that achieves them.
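To make the greedy pitfall concrete, consider a toy one-dimensional example (illustrative only; the goal positions, priorities, and straight-line cost model are our own assumptions, not drawn from any mission):

```python
# Toy transect: the vehicle starts at position 0; three goals lie on a line.
# Goal "A" has the highest priority but sits far from the cheaper goals.
pos = {"A": 10.0, "B": -1.0, "C": -2.0}

def path_cost(order, start=0.0):
    """Total travel cost of visiting goals in the given order."""
    cost, at = 0.0, start
    for g in order:
        cost += abs(pos[g] - at)
        at = pos[g]
    return cost

greedy = path_cost(["A", "B", "C"])  # highest priority first: 10 + 11 + 1 = 22
joint = path_cost(["B", "C", "A"])   # ordering over all goals: 1 + 1 + 12 = 14
```

Here the priority-greedy ordering costs 22 units of travel while an ordering chosen with all goals in view costs 14, so under a tight budget the greedy vehicle may exhaust its resources before reaching the cheap goals at all.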


\cite{Smith04} studied this kind of problem in the context of Mars
planetary rovers. That work observed the strong similarity between
over-subscribed planning and a variant of the traveling salesman
problem called the orienteering problem: finding a path through a
given set of control points that maximizes reward within a fixed cost
bound. \cite{Smith04} noted that AI planning systems are generally not
designed to deal with over-subscribed problems, since their goal
structure is conjunctive (i.e., all or nothing) rather than
disjunctive (i.e., goals can be dropped if infeasible). They addressed
this deficiency by first solving the abstracted orienteering problem,
and then using the solution to this relaxed problem to feed goals to
the planner in a prudent order and to guide the search process.

Our work exploits the key ideas underpinning this approach and applies
them to a planning scenario on board an AUV. A plan is a collection of
actions that an onboard executive dispatches to a lower-level
functional layer, which in turn actuates subsystems on the
AUV. Planning is done incrementally, interleaved with execution, in
the context of state estimates generated by the functional layer (see
\cite{mcgann08a} for details). We use path planning in a distance
graph to compute cost estimates for candidate solutions. We then use
local search, exploiting these estimates, to solve a relaxed form of
the orienteering problem. The solution to this relaxed problem
provides a goal-ordering heuristic for refinement search in a
partial-order planner. The planner is embedded as a specialized
deliberative component within an intelligent Teleo-Reactive Executive
\cite{mcgann08a}, which ties the plan and planning to real-world state
throughout mission execution. Goals may be input at the start of a
mission and/or discovered during mission execution based on
observations of interest. Initial plans may be feasible but can break
as execution unfolds, requiring a new plan to be generated.
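The cost-estimation and relaxation steps above can be sketched as follows. This is a minimal illustration under our own assumptions (the Floyd--Warshall all-pairs computation, the greedy-insertion local search, and all function names are ours; the onboard solver's actual algorithms and data structures are described in Section III):

```python
def all_pairs_costs(nodes, edges):
    """Floyd-Warshall over a weighted undirected distance graph;
    yields the cost estimates used to evaluate candidate goal orderings."""
    INF = float("inf")
    d = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for u, v, w in edges:
        d[u, v] = min(d[u, v], w)
        d[v, u] = min(d[v, u], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i, k] + d[k, j] < d[i, j]:
                    d[i, j] = d[i, k] + d[k, j]
    return d

def tour_cost(d, start, order):
    """Estimated cost of visiting goals in the given order."""
    cost, at = 0.0, start
    for g in order:
        cost += d[at, g]
        at = g
    return cost

def orienteer(d, start, rewards, budget):
    """Greedy-insertion local search for a relaxed orienteering problem:
    build an ordered subset of goals maximizing reward within the budget.
    The resulting order can serve as a goal-ordering heuristic."""
    order = []
    improved = True
    while improved:
        improved = False
        best = None
        for g in rewards:
            if g in order:
                continue
            for pos in range(len(order) + 1):
                cand = order[:pos] + [g] + order[pos:]
                if tour_cost(d, start, cand) <= budget:
                    if best is None or rewards[g] > best[0]:
                        best = (rewards[g], cand)
        if best:
            order = best[1]
            improved = True
    return order

# Toy graph: start "s", goals "a", "b", "c" with a shortcut chain a-b-c.
nodes = ["s", "a", "b", "c"]
edges = [("s", "a", 1), ("a", "b", 1), ("b", "c", 1), ("s", "c", 5)]
d = all_pairs_costs(nodes, edges)
order = orienteer(d, "s", {"a": 1, "b": 2, "c": 3}, budget=3)
# -> ['a', 'b', 'c']: all three goals fit the budget via the cheap chain.
```

A production solver would of course use stronger local-search moves and vehicle-specific cost models, but the structure is the same: cheap path-cost estimates feed a relaxed orienteering solve, whose ordering then guides the planner.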

\begin{figure}[tr]
  \centering \vskip-5pt
  \includegraphics[scale=0.075]{fig/MBARI-AUV.jpg}
  \caption{\small The MBARI \emph{Dorado} AUV on its support vessel
    the R/V \emph{Zephyr}.}
  \label{fig:auv-fig}
\end{figure}


The paper is laid out as follows. Section II describes the problem in
more detail in the context of AUV science missions. Section III
describes the design and implementation of the solver. Section IV
briefly outlines the integration of the planner with execution.
Section V describes preliminary results at sea. We conclude with a
discussion of related work and of limitations to be addressed in
future work.
