\chapter{Planners}
\label{chap:planners}

In the following we briefly describe the planners used in our experiments and in our program. In most cases we treated these tools as black boxes and tested them on a single problem type, namely the one found in our game. In all cases they were invoked with two PDDL input files and their output was parsed as a text file.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%\section{Planning type}
%
%TODO \emph{STRIPS} (Stanford Research Institute Problem Solver)~\cite{fikes1971strips}
%
%TODO why not partial observability (a player would be not needed)
%
%The problem considered in the paper falls into the realm of planning with partial observability (expressed as partial or potentially wrong, initial state information), which has been well studied by the planning community. As a starting point, see e.g. the work by Albore, Palacios and Geffner; or Bonet's papers "Conformant plans and beyond: Principles and complexity", "Bounded Branching and Modalities in Non-Deterministic Planning" and "Deterministic POMDPs Revisited".
%TODO - par eve IPC-n teszteltek STRIPS tervezo valtozo kornyezetben jobban teljesitett, mint a contingency tervezok, mert egyszerubb, minden valtozasnal siman ujratervezett. minel nagyobb problema, annal inkabb elonybe kerult


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Requirements}

As the means of communicating with the planning systems we chose \emph{PDDL 2.2}~\cite{edelkamp2003pddl} (Planning Domain Definition Language). Writing to a file and passing it to a different program is slower than direct memory sharing, but choosing differently would have greatly limited our alternatives. We were looking for tools supporting the PDDL requirements \emph{STRIPS} and \emph{typing}.

STRIPS (Stanford Research Institute Problem Solver) is the most basic and most widely implemented style of PDDL domain definition.

Typing enabled us to group PDDL objects into types, which both improves readability and decreases the number of required predicates.

In several cases, such as generating the required number of problem situations, it would have been useful to work with numerical quantities, but in order to keep the planning streamlined we constructed our domain without them.
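The two requirements we rely on can be illustrated with a minimal domain skeleton; the type, predicate and action names below are illustrative and are not taken verbatim from agent.pddl:

\begin{verbatim}
(define (domain example)
  (:requirements :strips :typing)
  (:types room agent - object)
  (:predicates (at ?a - agent ?r - room)
               (connected ?r1 ?r2 - room))
  (:action move
    :parameters (?a - agent ?from ?to - room)
    :precondition (and (at ?a ?from) (connected ?from ?to))
    :effect (and (not (at ?a ?from)) (at ?a ?to))))
\end{verbatim}

Without typing, the same grouping would have to be expressed with extra predicates (for example, an is-room predicate repeated in every precondition).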

Unfortunately, we found that most of the planners available to us that also supported our requirements were only available on UNIX-type systems. For this reason our tests were conducted on Linux.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Domain and problem sizes}

Except for the early stages of the program's development we used a single unified planning domain -- agent.pddl -- which can be found in the appendix as figure~\ref{fig:agent.pddl}.

As mentioned above, it uses two requirements: STRIPS and typing. It has 20 types, 12 predicates and 10 actions.

While our program is running we use two problem files -- burglar.pddl, describing problem definitions for burglar-type agents, and guard.pddl, containing problems for guard agents. These files are updated as required to reflect the game situation for which we need a new plan. An average planning problem has around 70 PDDL objects and 170 initial facts. The number of goals depends on the agent type: for a burglar it is always 3, for a guard it is usually about 4, and in level creation it depends on the number of required traps.
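These problem files follow the usual PDDL problem structure; a much simplified, hypothetical sketch (the object names are illustrative, not generated by our program):

\begin{verbatim}
(define (problem burglar-example)
  (:domain example)
  (:objects burglar1 - agent
            room1 room2 - room)
  (:init (at burglar1 room1)
         (connected room1 room2))
  (:goal (and (at burglar1 room2))))
\end{verbatim}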

\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Tested planners}

The planners we have tested on our problem domain so far are the following:
\begin{itemize}
  \item \emph{Blackbox}~\cite{ijcai99blackbox}
  \item \emph{FF}~\cite{Hoffmann2001ff} (Fast-Forward)
  \item \emph{HSP}~\cite{Bonnet98hsp} (Heuristic Search Planner)
%  \item \emph{Londex}~\cite{2007londex} (Long-Distance Mutual Exclusion)
  \item \emph{LPG}~\cite{2006lpg} (Local search for Planning Graphs)
  \item \emph{LPRPG}~\cite{2008lprpg} (Linear Program Relaxed Planning Graph)
  \item \emph{Marvin}~\cite{2007marvin}
  \item \emph{MaxPlan}~\cite{2006maxplan}
  \item \emph{Metric-FF}~\cite{2003metricff}
  \item \emph{MIPS-XXL}~\cite{2008mipsxxl} (Model Checking Integrated Planning System)
  \item \emph{SGPlan}~\cite{chen2004sgplan} (Subgoal Partitioning and Resolution in Planning)
\end{itemize}

\subsection{Blackbox}

A planning system based on converting problems specified in STRIPS notation into Boolean satisfiability problems and solving them with a variety of satisfiability engines. 

It is available for both Windows and Linux.

In our tests we used Linux version 43.

The planner failed to parse the domain because some actions contain an ``or'' precondition. After breaking the problematic actions into multiple ones, the tool terminated on our simplest test problem in 19 milliseconds, reporting no solution.
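The transformation we applied can be sketched as follows: an action with a disjunctive precondition is replaced by one action per disjunct. The action and predicate names below are illustrative, not taken from agent.pddl:

\begin{verbatim}
;; original action, rejected by Blackbox
(:action enter
  :parameters (?a - agent ?r - room)
  :precondition (or (has-key ?a ?r) (door-open ?r))
  :effect (and (at ?a ?r)))

;; equivalent pair of pure STRIPS actions
(:action enter-with-key
  :parameters (?a - agent ?r - room)
  :precondition (and (has-key ?a ?r))
  :effect (and (at ?a ?r)))

(:action enter-open-door
  :parameters (?a - agent ?r - room)
  :precondition (and (door-open ?r))
  :effect (and (at ?a ?r)))
\end{verbatim}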

\subsection{FF}

It is a forward-chaining heuristic state-space planner. It can handle classical STRIPS as well as full-scale ADL planning tasks.

In our tests we used version 2.3.

The planner was capable of solving our problems and proved to be the fastest in our tests.

\subsection{HSP}

It is a planner based on the ideas of heuristic search. It uses forward search with heuristics and plan length estimates.

We used version 2.0 for Linux.

The program reported that it had found no solution to our test problems.

%\subsection{Londex}
%
%A long-distance mutual exclusion can capture constraints over actions and facts not only at the same planning time step but also across multiple time steps. This provides a general tool to improve planning efficiency. As an application, Londex is incorporated in SATPLAN 2004.
%
%The program failed to produce a valid solution of our test problems.

\subsection{LPG}

It is a planning system based on local search and planning graphs, capable of handling numerical quantities and action durations.

In our tests we used the \emph{LPG-td} version.

The planner managed to process both the domain and the problem files; however, it failed to terminate on our simplest test problem by itself in any reasonable time (10 minutes), constantly reporting:
\begin{verbatim}
".......... search limit exceeded. Restart."
\end{verbatim}

\subsection{LPRPG}

It is a heuristic forward-search planner similar to Metric-FF, but with different handling of numbers. Numeric effects of actions are translated into constraints in a linear program while constructing the reachability graph.

In our tests we used the version presented at the 2011 International Planning Competition (IPC-2011).

The tool reported a type error; most probably LPRPG does not support entities having multiple types simultaneously.
 
\subsection{Marvin}

It is a planner with action-sequence-memoization (avoiding repeated calculations) to generate macro-actions, which are then used during the search for a solution.

In our case it failed to produce a valid solution to our test problems.

\subsection{MaxPlan}

A SAT solving planner that decomposes the original problem into a series of subproblems.

The tool failed to parse our domain file. It terminated with the following output:
\begin{verbatim}
"unknown type 'T_AGENT'"
\end{verbatim}

\subsection{Metric-FF}

It is an extension of the FF planner to numerical state variables. It is PDDL~2.1 compatible.

We tested the Linux version, which successfully solved our test problems. The game itself contains a working Metric-FF interface.

\subsection{MIPS-XXL}

It is a planner with PDDL~3 support. It supports the exploration of state spaces that are much larger than the available RAM.

We tested the 2006 and the 2008 versions; both managed to solve our test problems.

\subsection{SGPlan}
\label{subs:SGPlan}

SGPlan is a planning system utilizing subgoal partitioning and resolution. This means splitting a large planning problem into subproblems, each with its own subgoal, and resolving inconsistent solutions of subgoals using extended saddle-point conditions~\cite{chen2004sgplan}.

The program is PDDL~3 compatible. On the downside, its codebase is not open and the program is only available for UNIX-type systems.

We tested both version 5.22 and version 6. These were the planners that performed best through the early development, so in the later stages SGPlan became the one our program uses by default. One useful feature of the SGPlan implementation is that it does not give up ``too quickly'' on complicated problems. However, this also means that during gameplay we sometimes had to terminate the planning process for the sake of the player.

We have found that the PDDL processing in SGPlan has some unusual characteristics. Declaring the \emph{action description language} (ADL) and \emph{preferences} (soft goal definition) requirements helps it find shorter plans, even without ever using the corresponding features in the domain or the problem file. A less important observation is that its PDDL parser seems to be less strict than most of its counterparts: it allows undefined types in predicates, as long as those predicates are never used in actions or in problem definitions.
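The effect mentioned above is triggered by the requirements declaration alone; in the domain header it amounts to the difference between the following two lines (a minimal sketch, not the actual header of agent.pddl):

\begin{verbatim}
;; plain declaration
(:requirements :strips :typing)

;; declaration that makes SGPlan find shorter plans, even
;; though no ADL construct or preference is ever used
(:requirements :strips :typing :adl :preferences)
\end{verbatim}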

\section{Comparison}
\label{sec:comparison}

\subsection{Test conditions}

We repeatedly ran tests throughout the development as our domain changed; however, those tests no longer correspond to the current state of the program, so here we describe only the ones done with the final domain version.

At first we took one of our slightly larger levels (map-complex.xml), which has 28 rooms, 32 doors and a single agent with full knowledge of the environment, and repeatedly tested a full burglar planning problem on it. The level layout and the plan generated by the default planner can be found at the end of the thesis in figure~\ref{fig:test1}. Translated to a PDDL burglar problem, the level contained 74 PDDL objects, 167 initial facts and 3 goals. In the following we will call this \emph{Test~1} and list the results in table~\ref{tab:test1}.

In the next series of tests we used the same level as before, but without an important room (room 15) in the domain, so the agent needed to use a longer path. The required alternative path generated by the default planner can be found at the end of the thesis in figure~\ref{fig:test2}. The simplified problem contained 71 PDDL objects, 157 initial facts and 3 goals. The test results can be seen in table~\ref{tab:test2} as \emph{Test~2}.

The third test series was conducted on the same level as the previous ones, but instead of completely removing the choke-point room (room 15) as in the series above, we only defined a negative goal (like the ones we often use in the game) requiring the agent not to visit the selected room. The room to avoid is shown in figure~\ref{fig:test3}. The third problem had 74 PDDL objects, 167 initial facts and 4 goals. We will refer to the results as \emph{Test~3}.
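Expressed in PDDL, the difference between the second and third series is that instead of deleting room 15 we added a negated literal to the goal. A sketch with illustrative names (not taken from burglar.pddl):

\begin{verbatim}
(:goal (and (has burglar1 loot1)
            (at burglar1 exit1)
            (not (visited burglar1 room15))))
\end{verbatim}

Such a goal requires the planner to reason about a literal it must keep false for the entire plan, not merely about preconditions it cannot satisfy.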

In the fourth series we used a level that looks like a 10x10 chessboard (map-chess-board.xml). The interesting property of this level is its size: it is the largest layout so far on which we managed to operate our agents. It contains 101 rooms and 181 doors. Its layout is shown in figure~\ref{fig:test4}. The last test problem had 290 PDDL objects, 912 initial facts and 3 goals. We will name this test series \emph{Test~4} and list the results in table~\ref{tab:test4}.

Each planner was run 20 times by a script on a given problem; we measured the number of generated actions and the execution time.

The test environment was Ubuntu 11.10 (64-bit) Linux in a virtual machine (VirtualBox) with the virtual parameters of 2 GB of memory and a single Intel Core i7 X 920 2 GHz CPU.

\subsection{Results}

Legend to the tables:
\begin{itemize}
  \item \emph{Avg. Time} -- average processing time in seconds.
  \item \emph{Std. Deviation} -- standard deviation of processing times as the percentage of the average processing length.
  \item \emph{Actions} -- number of actions in the resulting plan.
\end{itemize}

\begin{table}[H]
  \begin{center}
    \begin{tabular}{|l|r|r|r|}
      \hline
      \multicolumn{4}{|c|}{\bf{Test 1}}\\\hline
      \hline
      \bf{Planner} & \bf{Avg. Time} & \bf{Std. Deviation} & \bf{Actions}\\\hline
      \hline
      FF 2.3        & 0.052 s & 12.916 \% & 78\\\hline
      Metric-FF     & 0.143 s &  5.931 \% & 78\\\hline
      SGPlan 522    & 0.173 s &  3.560 \% & 78\\\hline
      SGPlan 6      & 0.207 s & 12.418 \% & 78\\\hline
      MIPS-XXL 2008 & 0.301 s &  6.208 \% & 82\\\hline
      MIPS-XXL      & 0.455 s &  7.046 \% & 78\\\hline
      Blackbox      & --      & --        & --\\\hline
      HSP           & --      & --        & --\\\hline
      %Londex        & --      & --        & --\\\hline
      LPG           & --      & --        & --\\\hline
      LPRPG         & --      & --        & --\\\hline
      Marvin        & --      & --        & --\\\hline
      MaxPlan       & --      & --        & --\\\hline
    \end{tabular}
  \end{center}
  \caption{Planner results of test series 1}
  \label{tab:test1}
\end{table}

In our first test series, listed in table~\ref{tab:test1}, an interesting fact appeared: on the simple test problem the MIPS-XXL 2008 version generated a solution that was four steps longer than the plans produced by all other systems.

Fast-Forward implementations seem to dominate our problem domain.

\begin{table}[H]
  \begin{center}
    \begin{tabular}{|l|r|r|r|}
      \hline
      \multicolumn{4}{|c|}{\bf{Test 2}}\\\hline
      \hline
      \bf{Planner} & \bf{Avg. Time} & \bf{Std. Deviation} & \bf{Actions}\\\hline
      \hline
      FF 2.3        & 0.069 s & 25.741 \% & 98\\\hline
      SGPlan 522    & 0.162 s & 24.285 \% & 98\\\hline
      Metric-FF     & 0.214 s & 21.069 \% & 98\\\hline
      SGPlan 6      & 0.277 s & 15.784 \% & 98\\\hline
      MIPS-XXL 2008 & 0.599 s &  9.929 \% & 98\\\hline
      MIPS-XXL      & 0.817 s & 23.044 \% & 98\\\hline
      Blackbox      & --      & --        & --\\\hline
      HSP           & --      & --        & --\\\hline
      %Londex        & --      & --        & --\\\hline
      LPG           & --      & --        & --\\\hline
      LPRPG         & --      & --        & --\\\hline
      Marvin        & --      & --        & --\\\hline
      MaxPlan       & --      & --        & --\\\hline
    \end{tabular}
  \end{center}
  \caption{Planner results of test series 2}
  \label{tab:test2}
\end{table}

As can be seen from table~\ref{tab:test2}, in the second test the planning systems, with the exception of the MIPS-XXL planners, showed no significant increase in execution times.

All our planners failed the third test (Test~3). With the exception of the MIPS-XXL 2008 version, no system produced any positive result within 6 minutes. However, not even MIPS-XXL was capable of generating a truly valid plan; it ignored one of our goal conditions, and in the returned action sequence the burglar was allowed to be seen (the \emph{observed} fact in the PDDL got satisfied despite the negative goal).

\begin{table}[H]
  \begin{center}
    \begin{tabular}{|l|r|r|r|}
      \hline
      \multicolumn{4}{|c|}{\bf{Test 4}}\\\hline
      \hline
      \bf{Planner} & \bf{Avg. Time} & \bf{Std. Deviation} & \bf{Actions}\\\hline
      \hline
      FF 2.3        &  1.446 s & 6.052 \% & 96\\\hline
      SGPlan 522    &  7.085 s & 2.330 \% & 96\\\hline
      Metric-FF     &  7.848 s & 5.162 \% & 98\\\hline
      SGPlan 6      &  9.413 s & 2.837 \% & 98\\\hline
      MIPS-XXL      & 49.170 s & 5.078 \% & 96\\\hline
      MIPS-XXL 2008 & 56.178 s & 8.271 \% & 96\\\hline
      Blackbox      & --       & --       & --\\\hline
      HSP           & --       & --       & --\\\hline
      %Londex        & --       & --       & --\\\hline
      LPG           & --       & --       & --\\\hline
      LPRPG         & --       & --       & --\\\hline
      Marvin        & --       & --       & --\\\hline
      MaxPlan       & --       & --       & --\\\hline
    \end{tabular}
  \end{center}
  \caption{Planner results of test series 4}
  \label{tab:test4}
\end{table}

The fourth test series, listed in table~\ref{tab:test4}, was conducted on a level built to test the limits of our game. The idea behind creating this level was to find the borders within which our default planner (SGPlan 5.22) was still able to operate. As visible from the table, the extensive number of predicates significantly slowed down all planners. It had the most visible impact on our slowest planners, the MIPS-XXL pair.
It is worth noting that SGPlan 5.22 barely stays inside the arbitrarily selected planning time limit of 8 s that we use in the game. Any further increase in the number of predicates may render planning unstable on this level with the default planner.

We omitted it from table~\ref{tab:test4}, but it is an interesting fact that without the above-mentioned requirements (section~\ref{subs:SGPlan}) SGPlan version 5.22 produces a path with a length of 98 actions. Version 6 has no such behavior; it consistently produces plans of length 98.

\section{Conclusion}

Using the above tests we discovered that there is an important difference between unfulfilled preconditions and negative goals -- translated to our specific case, between a locked door (unfulfilled precondition) and a trapped room (negative goal requirement). For a human it seems obvious that in our domain there is no way to negate an already set ``observed'' predicate, so if the final goals require the falseness of such a predicate, a plan should never use an action producing it. Finding such irrefutable effects is not even a hard algorithmic task; however, the creators of our tested planners did not implement it. We had to be aware of this curious fact while developing our game.
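The situation can be sketched with the kind of action involved (the names below are illustrative, not taken from agent.pddl). If no action in the domain ever deletes \emph{observed}, then any action adding it is irrefutably fatal to a goal containing its negation, and could be pruned statically:

\begin{verbatim}
(:action cross-guarded-room
  :parameters (?a - agent ?r - room)
  :precondition (and (at ?a ?r) (guarded ?r))
  :effect (and (observed ?a)))

;; incompatible with any plan whose goal
;; contains (not (observed ?a))
\end{verbatim}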

Another important point clearly visible from our tests is that the tested planning systems do not produce optimal plans (based on the number of actions). Each of them works in a slightly different way, and we have to be aware of which one we used at design time and in online planning.

From the tests we also conclude that algorithms implementing heuristic best-first search are the best fit for our problem type.

