\documentclass[letterpaper]{article}
\usepackage{aaai}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{times}
\usepackage{url}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{multirow}


%% Colored comment boxes
\usepackage[usenames]{color} % Only used in comment commands
\definecolor{Blue}{rgb}{0,0.16,0.90}
\definecolor{Red}{rgb}{0.90,0.16,0}
\definecolor{DarkBlue}{rgb}{0,0.08,0.45}
\definecolor{ChangedColor}{rgb}{0.9,0.08,0}
\definecolor{CommentColor}{rgb}{0.2,0.8,0.2}
\definecolor{ToDoColor}{rgb}{0.1,0.2,1}


\newcommand{\hootan}[1]{\textbf{\color{CommentColor} /* #1  (hootan)*/}}
\newcommand{\martin}[1]{\textbf{\color{CommentColor} /* #1  (martin)*/}}
\newcommand{\commentout}[1]{}

\newcommand{\newrandomwalk}{RWM}

\begin{document}

\title{Performance of Blind Search Methods in Planning Benchmarks:
A Baseline Study}

\author{Hootan Nakhost\\
University of Alberta\\
Edmonton, Canada\\
nakhost@ualberta.ca
\And 
Martin M\"uller\\
University of Alberta\\
Edmonton, Canada\\
mmueller@ualberta.ca
}


\maketitle
\begin{abstract}
Much research in automated planning strives to develop either
better domain-independent heuristics, or better search algorithms.
One popular measure of progress is benchmarks, such as the ones
used at the series of International Planning Competitions (IPC).
But what does success at IPC tell us about progress in planning
techniques in general? One missing piece of the puzzle is the baseline
performance of ``dumb'' search methods on those test domains.
This paper investigates the performance of three blind search
methods on IPC benchmarks: breadth-first search, random walks,
and a new twist on random walks that adds
a basic hash table to remember previously visited states.

It is shown that 1137 out of 2112 IPC
problems can be solved by at least one of these blind search methods.
Some implications of these findings are discussed.
\end{abstract}

\section{Introduction}
\label{sec:intro}

Motivation:
How hard are current planning benchmarks when viewed purely as search problems?

What distinguishes planning from other areas of heuristic search? Partial answer: 
a structure that can be exploited by smart heuristics. The structure is given by expressing the problem
in a planning language such as PDDL.

\martin{look up what people such as Nau or Russell or wikipedia say. Some quotes?}

What makes a domain a good test domain for planning?

We know a lot about how ``smart'', state-of-the-art planners perform on IPC
problems. But how smart are they really? What is missing is a 
baseline of ``dumb'' planners. This will provide one measure of progress.
One goal is to quantify the difference between ``smart'' and ``dumb'' planners.
To discuss: is a domain better suited as a planning benchmark 
the larger this difference is?

\martin{If we feel nasty, maybe mention the results of the 
IPC 2008 learning competition, where most planners did better with ``smart''
learning turned off...}

Claim for discussion: a planning problem is \textit{flawed} if 
it can be solved by blind search.
\martin{still looking for a better term than flawed}

Study whether current IPC benchmarks are flawed.

For discussion: what is really flawed, the planning domain or the planners?
What do these domains
tell us about the limitations of current planners? The domain may still
have interesting structure, but these planners are not able to exploit it.
So it is important to recognize these domains and study them from the angle of
improvement (or lack thereof) over the baseline.

Claim: all current planners are sometimes dumb. Examples: plateaus, misleading heuristics,
large local minima, undetected deadlocks.

\subsection{Blind Search Methods}

Greedy Best-First Search as in LAMA \cite{LAMA},
with a blind heuristic: 0 in a goal state, 1 in any other state.
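To make the baseline concrete, here is a minimal Python sketch of greedy best-first search with the blind heuristic (the \texttt{successors}/\texttt{is\_goal} interface and all names are illustrative assumptions, not the planners' actual APIs). With FIFO tie-breaking among equal heuristic values, this degenerates to breadth-first expansion, which is the point of the blind baseline:

```python
import heapq
from itertools import count

def gbfs_blind(initial, successors, is_goal):
    """Greedy best-first search with the blind heuristic:
    h(s) = 0 in a goal state, 1 in any other state.
    `successors(s)` is assumed to return (action, state) pairs."""
    h = lambda s: 0 if is_goal(s) else 1
    tie = count()                    # FIFO tie-breaking among equal h-values
    frontier = [(h(initial), next(tie), initial, [])]
    seen = {initial}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path              # the plan: actions from initial to goal
        for action, succ in successors(state):
            if succ not in seen:     # duplicate detection via a hash table
                seen.add(succ)
                heapq.heappush(frontier,
                               (h(succ), next(tie), succ, path + [action]))
    return None                      # search space exhausted, no plan
```
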

Monte Carlo Random Walks \cite{Nakhost2009a}.
Arvand uses length-limited RWs and evaluates their endpoints with a heuristic.

Blind Random Walk (RW):
recognize a goal state and stop there. Otherwise, keep moving to a
state chosen
uniformly at random among all successor states.
At each step, restart from the initial state with probability $p$.
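A minimal sketch of this blind RW in Python (the \texttt{successors}/\texttt{is\_goal} interface and the overall step budget are illustrative assumptions):

```python
import random

def blind_random_walk(initial, successors, is_goal, p=0.001, max_steps=10**6):
    """Blind random walk with restarts: no heuristic, only goal recognition.
    `successors(s)` is assumed to return (action, state) pairs."""
    state, path = initial, []
    for _ in range(max_steps):
        if is_goal(state):
            return path                      # plan: actions since last restart
        if random.random() < p:              # restart with probability p
            state, path = initial, []
            continue
        succ = successors(state)
        if not succ:                         # dead end: forced restart
            state, path = initial, []
            continue
        action, state = random.choice(succ)  # uniform move among successors
        path.append(action)
    return None                              # step budget exhausted
```

Note that with restart probability $p$, walk lengths are geometrically distributed with mean $1/p$ steps between restarts.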

Blind Random Walk with memory:
like above, but add a hash table and avoid re-visiting the same state on a RW.
If memory runs out, simply continue the RW and stop changing the table.
Replacement scheme?

Also, back up dead ends.

Implemented in the Fast Downward framework \cite{FD}.

\section{Random Walk with Memory}
A key problem with RWs is that in domains with dead ends
they can waste a lot of time by repeatedly returning to
the same region. \newrandomwalk\ is a technique that tries
to remedy this problem by using memory. Algorithm \ref{alg:RWM} 
gives the pseudocode. At each step of the RW, all successors
are generated, and any state that has been generated before is pruned. 
Then, the next state is chosen uniformly at random from the remaining successors. 
If a dead end is encountered, the state is removed from its parent's 
successors. When the memory limit is hit, the RWs continue to run without
storing any further states in memory.

\begin{algorithm}[h!]
\caption{\newrandomwalk}
\textbf{Input} Initial State $s_0$, goal condition $G$\\
\textbf{Output} A solution plan 
\begin{algorithmic} 
\medskip
\STATE $n \leftarrow \mathit{Node}(s_0)$
\STATE $S \leftarrow \{s_0\}$
\STATE $plan \leftarrow \langle \rangle$
\WHILE {TRUE}
\IF{$n$ is  DEAD\_END}
\STATE $\mathit{PropagateDeadEnd}(n)$
\STATE $n \leftarrow \mathit{Node}(s_0)$ \COMMENT{restart}
\STATE $plan \leftarrow \langle \rangle$
\ENDIF
\STATE $n' \leftarrow \mathit{expand}(n, G, S)$
\STATE append the action leading from $n$ to $n'$ to $plan$
\IF{$G \subseteq \mathit{State}(n')$}
\RETURN $plan$
\ENDIF
\STATE $n \leftarrow n'$
\ENDWHILE
\end{algorithmic}
\label{alg:RWM}
\end{algorithm}	

\begin{algorithm}[h!]
\caption{expand}
\textbf{Input} a Node $n$, goal condition $G$, a set of states $S$\\
\textbf{Output} a Node $n'$
\begin{algorithmic} 
\medskip
\STATE $s \leftarrow \mathit{State}(n)$
\IF{$n$ is EXPANDED}
\RETURN $\mathit{random}(\mathit{children}(n))$
\ENDIF
\STATE $S' \leftarrow \mathit{generateSuccessors}(s)$
\STATE $S' \leftarrow S' - S$
\STATE mark $n$ as EXPANDED
\IF{memory is not full}
\STATE $ \mathit{addChildren} (n, S')$
\STATE $S \leftarrow S \cup S'$
\ENDIF
\IF{there exists $v \in S'$ such that $G \subseteq v$}
\RETURN $\mathit{Node}(v)$
\ENDIF
\RETURN $\mathit{Node}(\mathit{random}(S'))$

\end{algorithmic}
\label{alg:expand}
\end{algorithm}	

\begin{algorithm}[h!]
\caption{PropagateDeadEnd}
\textbf{Input} a Node $n$\\
\begin{algorithmic} 
\medskip
\STATE $p \leftarrow \mathit{parent}(n)$
\IF{$p$ = NULL}
\RETURN
\ENDIF
\STATE $\mathit{children}(p) \leftarrow \mathit{children}(p) - n$
\IF{$\lvert  \mathit{children}(p) \rvert = 0$}
\STATE PropagateDeadEnd($p$)
\ENDIF
\RETURN
\end{algorithmic}
\label{alg:deadend}
\end{algorithm}	
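As a cross-check of the pseudocode, here is a minimal runnable Python sketch of \newrandomwalk\ (the \texttt{Node} class, the \texttt{successors} interface, and the memory accounting are illustrative assumptions; the probability-$p$ restarts used in the experiments are omitted for brevity, so restarts happen only at dead ends):

```python
import random

class Node:
    """Search node; `children` stays None until the node is expanded."""
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = None

def propagate_dead_end(n):
    # Remove n from its parent's children; recurse if the parent
    # becomes childless (cf. Algorithm PropagateDeadEnd).
    p = n.parent
    if p is None or p.children is None:
        return
    p.children = [c for c in p.children if c is not n]
    if not p.children:
        propagate_dead_end(p)

def extract_plan(n):
    plan = []
    while n.parent is not None:
        plan.append(n.action)
        n = n.parent
    return list(reversed(plan))

def rwm(s0, successors, is_goal, memory_limit=10**6):
    """Random walk with memory: prune previously generated states,
    back up dead ends, restart from s0 when stuck."""
    if is_goal(s0):
        return []
    seen = {s0}                    # hash table of generated states
    root = Node(s0)
    n = root
    while True:
        if n.children == []:       # expanded but childless: dead end
            propagate_dead_end(n)
            if root.children == []:
                return None        # every path from s0 hits a dead end
            n = root               # restart the walk
            continue
        if n.children is not None:              # already expanded
            n = random.choice(n.children)
        else:
            succ = [(a, s) for a, s in successors(n.state) if s not in seen]
            children = [Node(s, n, a) for a, s in succ]
            if len(seen) < memory_limit:        # memory left: update table
                n.children = children
                seen.update(s for _, s in succ)
            if not children:
                n.children = []                 # mark dead end; handled above
                continue
            n = random.choice(children)
        if is_goal(n.state):
            return extract_plan(n)
```
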


\section{Experiments}
BFS was run using the Fast Downward framework. 
RW and \newrandomwalk\ are implemented inside Arvand's code base. 
After some initial experiments, the restart probability for both RW and \newrandomwalk\ was 
set to $0.001$. 
Tests were run on all IPC domains used since 1998, with a time limit of 30 minutes and
a memory limit of 4 GB per run on 2.4 GHz dual-core CPUs.

\begin{itemize}
\item Half of the IPC problems are solved by blind search. 
\item RW-based techniques do better than BFS: RW increases coverage 
by 23\%. 
\item \newrandomwalk\ improves on RW performance in domains where RW is weaker than
BFS. 
\item In several domains, the top planner of IPC 2011 cannot beat blind search. 
\end{itemize}

\begin{table*}[htb]
\centering
%\small
\sf
\scriptsize
%\tiny
\setlength{\tabcolsep}{15.0pt}%
\begin{tabular}{
lrrr
}
%\hline
 \multirow{1}{*}{{\bf Domain}}    & \multicolumn{1}{c}{{\bf RW}}  &\multicolumn{1}{c}{{\bf \newrandomwalk}}  & \multicolumn{1}{c}{{\bf BFS}} 
%\cline{2-4}
\\
{\bf Airport} & 32\% & 38\% & 42\% \\
{\bf Assembly} & 76\% & 80\% & 0\% \\
{\bf Barman} & 0\% & 0\% & 0\% \\
{\bf Blocks} & 51\% & 51\% & 51\% \\
{\bf Cybersec} & 36\% & 43\% & 0\% \\
{\bf Depot} & 40\% & 31\% & 18\% \\
{\bf Driverlog} & 55\% & 50\% & 35\% \\
{\bf Elevators 2008} & 10\% & 13\% & 3\% \\
{\bf Elevators 2011} & 0\% & 0\% & 0\% \\
{\bf Floortile} & 0\% & 0\% & 0\% \\
{\bf Freecell} & 100\% & 100\% & 18\% \\
{\bf Grid} & 40\% & 40\% & 20\% \\
{\bf Gripper} & 55\% & 55\% & 35\% \\
{\bf Logistics 1998} & 11\% & 11\% & 5\% \\
{\bf Logistics 2000} & 46\% & 50\% & 35\% \\
{\bf Miconic} & 94\% & 78\% & 33\% \\
{\bf Miconic Full Adl} & 52\% & 52\% & 52\% \\
{\bf Miconic Simple Adl} & 96\% & 82\% & 46\% \\
{\bf Movie} & 100\% & 100\% & 100\% \\
{\bf Mprime} & 91\% & 91\% & 54\% \\
{\bf Mystery} & 53\% & 53\% & 50\% \\
{\bf Nomystery} & 0\% & 10\% & 15\% \\
{\bf Notankage} & 92\% & 90\% & 28\% \\
{\bf Openstacks 2008} & 100\% & 100\% & 20\% \\
{\bf Openstacks 2011} & 100\% & 100\% & 0\% \\
{\bf Optical Telegraphs} & 8\% & 8\% & 4\% \\
{\bf Parcprinter 2008} & 43\% & 40\% & 33\% \\
{\bf Parcprinter 2011} & 0\% & 0\% & 0\% \\
{\bf Parking} & 0\% & 0\% & 0\% \\
{\bf Pathways} & 16\% & 20\% & 13\% \\
{\bf Pegsol 2008} & 96\% & 96\% & 90\% \\
{\bf Pegsol 2011} & 90\% & 95\% & 85\% \\
{\bf Philosophers} & 16\% & 18\% & 10\% \\
{\bf PSR Large} & 32\% & 30\% & 26\% \\
{\bf PSR Small} & 98\% & 98\% & 98\% \\
{\bf Rovers} & 100\% & 95\% & 12\% \\
{\bf Satellite} & 25\% & 19\% & 13\% \\
{\bf Scanalyzer 2008} & 50\% & 50\% & 40\% \\
{\bf Scanalyzer 2011} & 35\% & 35\% & 20\% \\
{\bf Schedule} & 17\% & 17\% & 8\% \\
{\bf Sokoban 2008} & 0\% & 23\% & 36\% \\
{\bf Sokoban 2011} & 0\% & 10\% & 15\% \\
{\bf Storage} & 66\% & 66\% & 46\% \\
{\bf Tankage} & 68\% & 57\% & 20\% \\
{\bf Tidybot} & 90\% & 85\% & 10\% \\
{\bf Tpp} & 53\% & 50\% & 20\% \\
{\bf Transport 2008} & 20\% & 23\% & 20\% \\
{\bf Transport 2011} & 0\% & 0\% & 0\% \\
{\bf Trucks} & 3\% & 10\% & 20\% \\
{\bf Visitall} & 40\% & 45\% & 0\% \\
{\bf Woodworking 2008} & 20\% & 20\% & 16\% \\
{\bf Woodworking 2011} & 5\% & 5\% & 5\% \\
{\bf Zenotravel} & 60\% & 55\% & 35\% \\
\hline
{\bf Total} & 53\% & 51\% & 29\% \\
\end{tabular}

\vspace{-0.2cm}
\caption{\label{tab:IPC} Coverage in all the IPC benchmark domains.}
\vspace{-0.4cm}
\end{table*}


\section{Discussion}
What did we learn? Discuss some of the domains.

Useful to have dumb searchers for portfolio?

Future work: try other dumb searches, try adding some simple heuristics and see what happens.

\bibliographystyle{aaai}
\bibliography{blind-search}

\end{document}
