\documentclass[letterpaper]{article}
\usepackage{aaai}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage{graphicx}
\usepackage{listings}
\usepackage[ruled,vlined]{algorithm2e}
\usepackage{multirow}
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PDFMARK for TeX and GhostScript
% Uncomment and complete the following for metadata if
% your paper is typeset using TeX and GhostScript (e.g
% if you use .ps or .eps files in your paper):
% \special{! /pdfmark where
% {pop} {userdict /pdfmark /cleartomark load put} ifelse
% [ /Author (John Doe, Jane Doe)
% /Title (Paper Title)
% /Keywords (AAAI, artificial intelligence)
% /DOCINFO pdfmark}
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PDFINFO for PDFTeX
% Uncomment and complete the following for metadata if
% your paper is typeset using PDFTeX
% \pdfinfo{
% /Title (Input Your Title Here)
% /Subject (Input The Proceedings Title Here)
% /Author (First Name, Last Name;
% First Name, Last Name;
% First Name, Last Name;)
% }
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Uncomment only if you need to use section numbers
% and change the 0 to a 1 or 2
% \setcounter{secnumdepth}{0}
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{Best-First Search with Lookaheads}
%\author{Tamar Kulberis \and Ariel Felner \\ tamarkulberis@gmail.com \\ felner@bgu.ac.il \And
%Roni Stern \\ roni.stern@gmail.com}

\author{Tamar Kulberis\\
        Information Systems Engineering\\
       Ben Gurion University\\
       Beer-Sheva, Israel\\
       kulberis@bgu.ac.il\\
\And
       Roni Stern\\
       Information Systems Engineering\\
        Ben Gurion University\\
       Beer-Sheva, Israel\\
       roni.stern@gmail.com\\
\And
       Ariel Felner\\
       Information Systems Engineering\\
        Ben Gurion University\\
       Beer-Sheva, Israel\\
       felner@bgu.ac.il\\
}

\begin{document}
\nocopyright % Removes AAAI copyright text
\linesnumbered
\maketitle

\begin{abstract}

Best-First Search (BFS) is a classic general search technique. It
maintains an open-list of generated nodes and expands the least-cost node
from it, while adding its unvisited neighbors to the open-list. The main
limitation of BFS is that it stores all the states it visits in memory.
BFS is composed of a number of primitive steps and can be implemented in
many ways. In this paper we delve into the implementation details of BFS
by analyzing the standard primitive steps and their execution order. In
particular, we study the roles of the {\em goal test} and {\em duplicate
detection} primitives. Based on this study we introduce a new BFS
variation called BFS with Lookahead (BFSL). The basic idea of BFSL is to
perform limited DFS lookaheads from the frontier of the BFS (the
open-list). We show that this algorithm requires significantly less
memory. In addition, a time speedup is also achieved when the lookahead
depth is chosen correctly. Experimental results on several domains
demonstrate the benefits of all our ideas.\footnote{Tamar is an M.Sc.
student and this paper summarizes her current achievements. This work was
never submitted before.}



%Finally, several additional BFS enhancements are described as future work.

\end{abstract}

\section{Introduction}





Best-first search (BFS) is a well-known general purpose search algorithm.
It keeps a {\em closed list} (denoted hereafter as CLOSED) of nodes that
have been expanded, and an {\em open list} (denoted hereafter as OPEN) of
nodes that have been generated but not yet expanded. At each cycle of the
algorithm, it expands the most promising node (the {\em best}) from OPEN.
When a node is expanded it is moved from OPEN to CLOSED, and its children
are generated and added to OPEN. The search terminates when a goal node is
chosen for expansion, or when OPEN is empty.
Figure~\ref{fig:bfsBookAlgorithm} shows the pseudo code of a general BFS
from a common AI textbook~\cite{russel2003modern}.

\begin{figure}[tb]
\centering
\includegraphics[width=8cm, height=6cm]{bfsBookAlgorithm.eps}
\caption{Standard BFS from \cite{russel2003modern}}
\label{fig:bfsBookAlgorithm}\vspace{-0.5cm}
\end{figure}

Many known algorithms are special cases of BFS  differing only in their
cost function. If the cost of a node is its depth in the tree, then BFS
becomes breadth-first search (denoted here as BRFS), expanding all nodes
at a given depth before any nodes at any greater depth.   If the edges in
the graph have different costs, then taking \(g(n)\), the sum of the edge
costs from the start to node \(n\) as the cost function, yields Dijkstra's
algorithm.  If the cost is \(f(n)=g(n)+h(n),\) where \(h(n)\) is a
heuristic estimation of the cost from node \(n\) to a goal, then BFS
becomes the A* algorithm. BFS is complete, and in the case of A* with an
admissible heuristic (i.e., one that never overestimates the actual cost
from node \(n\) to a goal), it returns optimal solutions and is optimally
effective~\cite{dechter1985generalizedBestFirst}.
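The special cases above differ only in the cost function supplied to one generic best-first loop. As a rough illustration (not code from the paper), the Python sketch below shows how swapping the cost function turns the same routine into Dijkstra's algorithm or A*; the toy graph and heuristic values are invented for the example, and the goal test is done at expansion, as in the textbook scheme of Figure 1.

```python
import heapq

def best_first_search(start, goal, neighbors, cost):
    """Generic best-first search. `cost(g, n)` ranks node n given path
    cost g so far; returns the cost of the path found, or None."""
    open_list = [(cost(0, start), 0, start)]   # (f, g, state)
    closed = set()
    while open_list:
        f, g, n = heapq.heappop(open_list)
        if n == goal:               # goal test at expansion (textbook BFS)
            return g
        if n in closed:
            continue
        closed.add(n)
        for m, w in neighbors(n):
            if m not in closed:
                heapq.heappush(open_list, (cost(g + w, m), g + w, m))
    return None

# Invented weighted graph: state -> list of (neighbor, edge cost).
graph = {'s': [('a', 1), ('b', 4)], 'a': [('b', 1), ('g', 5)],
         'b': [('g', 1)], 'g': []}
nbrs = lambda n: graph[n]

dijkstra = best_first_search('s', 'g', nbrs, lambda g, n: g)        # f = g
h = {'s': 2, 'a': 2, 'b': 1, 'g': 0}          # invented admissible estimates
astar = best_first_search('s', 'g', nbrs, lambda g, n: g + h[n])    # f = g+h
```

With depth (unit edge costs) as the cost function, the same loop behaves as breadth-first search.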

%Special cases of best-first search include breadth-first search (labeled BRFS in this paper), Dijkstra's single-source shortest-path algorithm\cite{dijkstra1959aNoteOn},  and the A* algorithm\cite{hart1968aFormalBasis}, differing only in their cost functions \(f(n)\).  If the cost of a node is its depth in the tree, then best-first search becomes breadth-first search, expanding all nodes at a given depth before any nodes at any greater depth.  If the edges in the graph have different costs, then taking \(g(n)\), the sum of the edge costs from the start to node \(n\) as the cost function, yields Dijkstra's algorithm.  If the cost is \(f(n)=g(n)+h(n),\) where \(h(n)\) is a heuristic estimation of the cost from node \(n\) to a goal, then best-first search becomes the A* algorithm. If \(h(n)\) is {\em admissible}, i.e., never overestimates the actual cost from node \(n\) to a goal, then A* is guaranteed to return an optimal solution, if one exists. BFS is complete, and in the case of A* returns optimal solutions, and is optimally effective\cite{dechter1985generalizedBestFirst} given an admissible heuristic.

The main drawback of BFS is its memory requirements. BFS stores in memory
all the {\em open} and {\em closed} nodes in order to recognize a state
that has already been generated and to enable solution reconstruction once
a goal is reached. The space complexity of BFS therefore grows
exponentially with the depth of the search. Consequently, BFS cannot solve
difficult problems since for large state spaces it usually exhausts all
the available memory before reaching a goal. In addition, the constant
time per node in a typical expansion cycle is rather heavy due to the
different primitive operations that are performed.

By contrast, the memory needs of depth-first search (DFS) are only linear
in the depth of the search and the constant time per node is rather light.
DFS in its basic form may never find a solution, and when it does, it
guarantees nothing about the quality of the solution it finds. However,
the memory limitation of BFS has been addressed in the past and several
algorithms that generate nodes in a best-first fashion while activating
different versions of DFS have been developed. The most common algorithm
is Iterative-Deepening A* (IDA*)~\cite{korf1985depthFirstIterative} which
performs a series of DFS calls from the root in an increasing order of
costs. Another known linear-space algorithm is recursive best-first search
(RBFS)\cite{korf1993linearSpaceBest}. It expands new nodes in a best-first
order even when the cost function is {\em nonmonotonic}. Since these
algorithms are DFS oriented, their efficiency deteriorates significantly
in state spaces that contain many cycles or a very wide distribution of
heuristic values.

Other attempts to search in a best-first order but with a limited amount
of memory include MREC\cite{sen1989fastRecursiveFormulation},
MA*\cite{chakrabarti1989heuristicSearchIn} and
SMA*\cite{russell1992efficientMemoryBounded}. Even though these algorithms
generate fewer nodes than IDA*, they run slower in practice because of
memory maintenance overhead (see \cite{korf1993linearSpaceBest}).

In this paper we explore approaches to enhance BFS performance. First, we
study the implementation details of BFS. BFS includes many different
primitive steps such as {\em duplicate detection}, {\em goal test}, {\em
insertion} and {\em deletion} from different data structures (e.g., OPEN
and CLOSED). While the basic mechanism of BFS is well known, we show that
the standard textbook implementation should be revisited. We study
different implementations and discuss the pros and cons of each with
regard to time and memory needs.

Second, based on this study, we introduce a new algorithm called {\em BFS
with Lookahead} (BFSL). The basic idea is to use the general schema of BFS
but to perform limited DFS lookaheads from nodes when they are expanded.
BFSL has great potential to exploit the complementary benefits of BFS and
DFS. Using these lookaheads, a significant reduction in memory
requirements is always obtained. Furthermore, we show that by correctly
choosing the lookahead depth, we can control the problem of duplicates and
cycles which occur in DFS, and thus a significant time speedup can also be
obtained in many cases.
%We show that the exact choice of the lookahead depth is domain dependent and is greatly correlated with the rate of small cycles.
%[[AF: do we really show this. If not, we need to better rephrase.]]
Experimental results on several domains demonstrate the benefits of all
our ideas.


The next sections discuss several primitive steps of BFS and study some
aspects of their implementation details. Then the new algorithm, BFS with
lookahead (BFSL), will be introduced.


\section{Duplicate Detection} \label{sec:duplicateDetection}


\begin{algorithm}[tb]
\SetLine \KwIn{$n$, the \emph{best} node in OPEN}
\uIf{goalTest($n$)=True}{
    halt \tcc*[l]{Goal was found}
}
insert $n$ to CLOSED \\
\ForEach{state operator $op$}{
    $child \leftarrow$ generateNode($op,n$) \\
    \eIf{duplicateDetection($child$)=False}{
        insert $child$ to OPEN
    }{ %Else
        update cost function of $duplicate$ (if required)
    }
}
\caption{Basic BFS node expansion cycle}
\label{alg:simpleBFS}
\end{algorithm}

Algorithm \ref{alg:simpleBFS} presents the basic steps of the expansion
cycle of the least cost node from OPEN. A number of steps are performed on
the node just extracted from OPEN. First, a goal test is performed. Then,
all its children are generated. For each child $c$, OPEN and CLOSED are
checked to see whether they already contain duplicates of $c$. This step
is called {\em Duplicate Detection} (DD). If no duplicate was found, $c$
is inserted to OPEN. In order to perform efficient DD, both OPEN and
CLOSED are usually maintained in a hash table. In addition, OPEN should
also be maintained as a priority queue (usually implemented intermixed
with the hash table) in order to allow extraction of the least-cost
node.
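One common way to realize this intermixed structure, sketched below as a hypothetical Python class (not the paper's implementation), is a binary heap for extract-min plus a dictionary for duplicate lookup, with stale heap entries left behind by cost updates discarded lazily at pop time.

```python
import heapq

class OpenList:
    """OPEN as a heap intermixed with a hash table: cheap extract-min
    and cheap duplicate lookup. Stale heap entries (left after a cost
    update) are skipped lazily when popped."""
    def __init__(self):
        self.heap = []      # (cost, state) entries, possibly stale
        self.table = {}     # state -> current best cost (the hash table)

    def push(self, state, cost):
        old = self.table.get(state)
        if old is not None and old <= cost:
            return          # duplicate with no improvement
        self.table[state] = cost
        heapq.heappush(self.heap, (cost, state))

    def pop(self):
        """Extract the least-cost state, or None if OPEN is empty."""
        while self.heap:
            cost, state = heapq.heappop(self.heap)
            if self.table.get(state) == cost:   # skip stale entries
                del self.table[state]
                return state, cost
        return None

    def __contains__(self, state):          # the DD lookup against OPEN
        return state in self.table
```

Lazy deletion trades a little heap space for avoiding a decrease-key operation, which Python's `heapq` does not provide.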

\subsection{Change and Undo Vs. Copy and Discard}

Generation of a child usually involves two atomic steps.
\begin{itemize}
\item {\bf COPY:} copying the parent node into a new data structure.
\item {\bf CHANGE:} applying the selected operator
(e.g., moving a tile in the tile puzzle) to the new child, causing a change in its description.
\end{itemize}

The relative time required for each of these steps is
domain/implementation dependent. However, it is possible to evaluate a
node without completely generating it. This can be done by modifying its
parent node, evaluating it, and undoing the modification. This observation
allows the two following implementation variants of generating a node
(lines 4-10) in the basic BFS described in Algorithm~\ref{alg:simpleBFS}.

\begin{itemize}
\item {\bf Copy and Discard Duplicate Detection (CDD):} In this version
a node is generated by first copying the parent and then applying the
operator. The DD operation is performed on the newly generated node.

\item {\bf Change and Undo Duplicate Detection (UDD):}
Here, we first CHANGE the parent and perform the DD test. Only if DD
returns FALSE do we COPY this node into a new state item and insert it
into OPEN. Then, we perform an UNDO operation to restore the parent node.

\end{itemize}
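The two variants can be contrasted in a minimal Python sketch (invented for illustration, not the paper's code), where a state is a mutable list, each operator comes with its inverse, and a single `seen` set stands in for the DD test against OPEN and CLOSED.

```python
def expand_cdd(parent, operators, seen, open_list):
    """Copy-and-Discard: COPY first, CHANGE the copy, then DD on it;
    duplicates are simply discarded, so no UNDO is ever needed."""
    for apply_op, undo_op in operators:
        child = list(parent)              # COPY
        apply_op(child)                   # CHANGE
        key = tuple(child)
        if key not in seen:               # DD?
            seen.add(key)
            open_list.append(child)       # INSERT

def expand_udd(parent, operators, seen, open_list):
    """Change-and-Undo: CHANGE the parent in place, run DD, COPY only
    when the child is new, then UNDO to restore the parent."""
    for apply_op, undo_op in operators:
        apply_op(parent)                  # CHANGE
        key = tuple(parent)
        if key not in seen:               # DD?
            seen.add(key)
            open_list.append(list(parent))  # COPY + INSERT
        undo_op(parent)                   # UNDO
```

Note where the COPY sits: CDD pays it for every generated child, while UDD pays it only for non-duplicates, at the price of an UNDO per child.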

\begin{table}[tb]
\begin{tabular}{|c|c|c|}
    \hline
Duplicate Detection &  CDD    &   UDD  \\
    \hline
\multirow{5}{*}{False} & COPY & CHANGE \\
& CHANGE & DD? \\
& DD? & COPY \\
& INSERT & INSERT \\
& - & UNDO \\
\hline
\multirow{3}{*}{True} & COPY & CHANGE \\
& CHANGE & DD?\\
& DD?& UNDO \\
    \hline
\end{tabular}
\caption{CDD vs. UDD}\label{tab:CDDandPDDbehavior}
\end{table}

Table \ref{tab:CDDandPDDbehavior} shows the different operations taken by
both CDD and UDD based on the result of the DD test. The following
primitive steps are used in the table. {\bf Duplicate-detection} (DD) is
the matching of a node against OPEN and CLOSED. This operation returns
TRUE or FALSE. An {\bf INSERT} operation inserts the new child into OPEN,
and an {\bf UNDO} operation is the reverse of the CHANGE operation.

Choosing the more efficient variant depends on the domain and the state
representation. For domains with a large number of duplicate nodes and a
costly COPY operation, UDD will probably perform faster, while for domains
with a relatively small rate of duplicate nodes and a cheap COPY operation
(e.g., with a compact state representation), CDD might be preferable.
Formally, let $P_{dd}$ be the probability that a duplicate node is found
(DD returns True), and let $C_{x}$ be the constant cost of performing
action $x$. Since both CDD and UDD perform exactly the same expansions, it
is enough to compare the expected constant time of their expansion cycles
as follows.

\noindent $T(CDD)=P_{dd}\times{}(C_{copy}+C_{change}+C_{dd})+ \newline
~~~~~~~~(1-P_{dd})\times{}(C_{copy}+C_{change}+C_{dd}+C_{insert})$

\noindent $ T(UDD)=P_{dd}\times{}(C_{change}+C_{dd}+C_{undo})+ \\
~~~~~~~~(1-P_{dd})\times{}(C_{change}+C_{dd}+C_{copy}+C_{insert}+C_{undo})$

Comparing these two equations and eliminating common elements
($T(CDD)-T(UDD)=P_{dd}\times C_{copy}-C_{undo}$), we conclude that the
relative speed of CDD vs. UDD depends on the duplicate node probability
and the constants $C_{copy}$ and $C_{undo}$. If $P_{dd} \times C_{copy} >
C_{undo}$ then UDD will be faster; otherwise CDD will be faster. Note that
$P_{dd}$ is an attribute of the problem domain, while $C_{copy}$ and
$C_{undo}$ are attributes of the state representation.
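Plugging the per-operation behavior of Table~\ref{tab:CDDandPDDbehavior} into these expectations gives a quick way to check which variant wins for given constants. The Python sketch below does exactly that; all the numeric constants are invented for illustration and only mimic the qualitative shape of the two 15-puzzle representations.

```python
def t_cdd(p_dd, c):
    """Expected per-child cost of Copy-and-Discard: always COPY, CHANGE,
    DD; INSERT only when no duplicate is found; never UNDO."""
    return c['copy'] + c['change'] + c['dd'] + (1 - p_dd) * c['insert']

def t_udd(p_dd, c):
    """Expected per-child cost of Change-and-Undo: always CHANGE, DD,
    UNDO; COPY and INSERT only when no duplicate is found."""
    return (c['change'] + c['dd'] + c['undo']
            + (1 - p_dd) * (c['copy'] + c['insert']))

# Invented constants. Explicit-like: COPY heavy (16 cells), UNDO trivial.
explicit = {'copy': 16, 'change': 2, 'dd': 5, 'insert': 3, 'undo': 2}
# Packed-like: COPY is 2 words, UNDO must re-pack the whole state.
packed = {'copy': 2, 'change': 16, 'dd': 5, 'insert': 3, 'undo': 16}
```

With a duplicate probability of 0.5, the explicit-like constants favor UDD ($P_{dd} \times C_{copy} > C_{undo}$) and the packed-like constants favor CDD, matching the qualitative outcome of the experiments.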


{\bf Experimental results:} We implemented both CDD and UDD on two state
representations of the 15 puzzle. In the first representation ({\em
explicit representation}), each state is represented as an array of size
16 (=number of tiles). In the second representation ({\em packed
representation}) the explicit state is {\em packed} and saved in 2 words.
Performing a CHANGE (or UNDO) operation requires unpacking the node into
the explicit representation, while a COPY operation simply requires
copying 2 words. Table~\ref{tab:enhancementResults} presents the results
of these experiments (first two lines). Each value in the table is the
average runtime of BRFS over a set of 50 instances of the 15 puzzle
located 22 moves away from the goal. All the experiments in this paper
were run on a 2GHz Core2Duo PC with 2GB of memory.
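One plausible realization of such a packed representation (a sketch under the assumption of 4 bits per tile, sixteen tiles in total, so the whole state fits in 64 bits, i.e. two 32-bit words; the paper does not specify its exact encoding):

```python
def pack(tiles):
    """Pack a 16-tile state (values 0..15, blank = 0) into a single
    64-bit integer, 4 bits per tile position."""
    word = 0
    for i, t in enumerate(tiles):
        word |= t << (4 * i)
    return word

def unpack(word):
    """Inverse of pack: recover the explicit 16-cell array."""
    return [(word >> (4 * i)) & 0xF for i in range(16)]
```

COPY then amounts to copying one machine-word-sized integer, while CHANGE/UNDO must go through `unpack`/`pack`, which is exactly the trade-off discussed above.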

\begin{table}[tb]
\small
\begin{tabular}{|c|c|c|c|}
    \hline
Goal test & Representation & UDD & CDD  \\
\hline
At expansion & Explicit & 24.5 sec & 26.8 sec \\
At expansion & Packed & 51.0 sec & 40.1 sec \\
\hline
At generation & Explicit & 14.1 sec & 15.6 sec \\
At generation & Packed & 31.0 sec & 24.4 sec \\
\hline
\end{tabular}
\caption{CDD and UDD with different representations}
\label{tab:enhancementResults}
\end{table}


For the explicit representation, UDD outperforms CDD, while for the packed
representation, CDD is more efficient. This is explained by the cost of
the UNDO and COPY steps in each representation. In the explicit
representation, an UNDO step involves a change to two cells of the state
array (switching between the blank and a tile), while a COPY step requires
copying all 16 cells. By contrast, in the packed representation an UNDO
operation requires packing the parent state back; performing this UNDO for
every generated node is more costly than simply copying the compact 2-word
states. The last two lines of this table are discussed later.

%Another interesting observation is that although the packed representation
%is more memory efficient, it is more time consuming. This is explained by
%the extra steps that are performed during activities such as COPY and
%UNDO. Since COPY is one of the most common operation throughout the
%search, these extra steps have a meaningful impact on the total execution
%time.


\subsection{Late Duplicate Detection} \label{sec:lateDuplicateDetection}
In standard BFS implementations, DD is performed as soon as a node is
generated. Korf, in his seminal work on disk-based graph search summarized
in \cite{korf2008linearTimeDisk}, argued that if DD requires searching
external memory, it is highly inefficient to perform it at every node
generation. He suggested the {\em delayed duplicate detection} mechanism
(DDD) for disk-based BFS searches. BFS with DDD stores OPEN and CLOSED
intermixed on a single list. During the expansion step, all nodes with a
specific cost are expanded, and their successors are added to the end of
the node list without checking for duplicates. The nodes are then sorted
by their state representation, and thus duplicate nodes representing the
same state occupy adjacent locations. By linearly scanning the sorted list
periodically (e.g., after expanding an entire level in BRFS), all
duplicate nodes are removed, while saving a single copy of each node with
the minimum cost. \cite{korf2008linearTimeDisk} also describes a possible
way of implementing this mechanism in an in-memory BFS algorithm.
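The sort-and-scan step of DDD can be sketched in a few lines of Python (an illustration of the idea, not Korf's disk-based implementation): after sorting, duplicates of a state are adjacent, so one pass keeps a single minimum-cost copy of each.

```python
def delayed_duplicate_elimination(nodes):
    """One DDD merge pass over a batch of generated nodes, given as
    (state, cost) pairs: sort so duplicates become adjacent, then keep
    only the cheapest copy of each state."""
    nodes.sort(key=lambda sc: (sc[0], sc[1]))   # duplicates now adjacent
    result = []
    for state, cost in nodes:
        if result and result[-1][0] == state:
            continue            # a copy with minimum cost was already kept
        result.append((state, cost))
    return result
```

In the disk-based setting this pass is what replaces the per-generation hash lookup; here it simply shows why sorting makes duplicate removal a linear scan.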

\begin{algorithm}[tb]
\SetLine \KwIn{$n$, the \emph{best} node in the open list}
\uIf{duplicateDetection($n$)=True}{
    return \tcc*[l]{Node already in CLOSED}
} \uIf{goalTest($n$)=True}{
    halt \tcc*[l]{Goal was found}
}
insert $n$ to CLOSED \\
\ForEach{state operator $op$}{
    $child \leftarrow$ generateNode($op,n$) \\
    insert $child$ to OPEN
} \caption{Pseudo code for late duplicate detection}
\label{alg:lazyDuplicateDetection}
\end{algorithm}

We propose an alternative way to delay the DD operation in an in-memory
implementation. Our method is only a slight modification of the regular
implementation of BFS. Instead of performing DD on newly generated nodes,
the DD can be performed later, when a node is chosen for expansion. To
distinguish it from Korf's DDD, we call this variation {\em late duplicate
detection} (LDD). Algorithm~\ref{alg:lazyDuplicateDetection}
presents the pseudo code for BFS with LDD. Every generated child is
inserted to OPEN, and nodes are checked for duplicates only when expanded.
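A minimal Python sketch of the LDD scheme on breadth-first search (invented for illustration; the paper's experiments are on the 15 puzzle): children go straight into OPEN, and the duplicate test runs only at expansion time, against CLOSED alone.

```python
from collections import deque

def brfs_ldd(start, goal, neighbors):
    """Breadth-first search with Late Duplicate Detection. Only CLOSED
    needs a hash table; OPEN is a plain FIFO queue that may hold
    duplicates. Returns the solution depth, or None."""
    open_list = deque([(start, 0)])
    closed = set()
    while open_list:
        n, depth = open_list.popleft()
        if n in closed:              # late DD, against CLOSED only
            continue
        if n == goal:
            return depth
        closed.add(n)
        for m in neighbors(n):
            open_list.append((m, depth + 1))   # no DD at generation
    return None
```

The duplicate entries sitting in OPEN are the memory price; the removed per-generation hash lookup and the simpler OPEN structure are the gain.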




The expected time benefit of performing LDD is twofold. First, the
constant time per node generation decreases, as there is no need to
perform a costly DD, which involves calculating the hash value as well as
costly pointer chasing in the collision list. Since many nodes are
generated without ever being expanded (all the nodes in OPEN when the goal
node is found), this is a significant time saving. Second, the most
important advantage of LDD is that the DD test is performed only on
CLOSED, instead of on both OPEN and CLOSED. Therefore, OPEN need not be
stored in a hash table, and a simple implementation of a priority queue
(e.g., a heap) is sufficient. As a consequence, insertion and extraction
from OPEN should be faster. Furthermore, memory can be saved too, as OPEN
nodes no longer need to be stored in the hash table, which may require an
additional pointer for each item. All this comes at the cost that
duplicate versions of a node can exist in OPEN, thus wasting memory.
However, if the number of small cycles and duplicates is small, LDD might
reduce both memory and time.

{\bf Experimental results:} The first two lines of
Table~\ref{tab:lookaheadBreadthTilePuzzle} present the results of running
standard BRFS and BRFS with LDD. Each value in the table is the average
over the same set of 50 instances of the 15 puzzle. The \emph{expanded}
column shows the number of nodes expanded during the search. The
\emph{open} column shows the number of nodes inserted into OPEN, and the
\emph{time} column shows the runtime in seconds. LDD inserted 10\% more
nodes to CLOSED and 20\% more nodes to OPEN. However, since OPEN nodes
needed less constant memory, the total memory was reduced as well. The
number of nodes stored in the hash table is written in {\bf bold}: only
11,164,851 for LDD vs. 19,473,242 for standard DD. Since each item in the
hash table requires its own pointer, this saving dominates the cost of the
additional duplicate nodes that were inserted to OPEN with LDD. Moreover,
due to the lighter constant time per node, LDD improved the total runtime
from 16.62 to 13.90 seconds.
% [[TODO TK: HOW COME THE NUMBER OF EXPANDED NODES ISN'T THE SAME?]]




\section{Early Goal Test During Child Generation}
\label{sec:goalTestDuringChildGeneration}

{\em Goal Test} is an important primitive step in any search algorithm,
used to identify a goal node. When such a node is identified, the search
usually halts. Textbooks (such as the one cited above) usually teach that
a {\em goal test} in BFS is performed only when a node is chosen for
expansion and is extracted from OPEN. This is valid for all variations of
BFS (including BRFS). However, good implementers of BRFS usually recognize
that the search can stop earlier if a goal test is performed when a node
is generated. We call this {\em early goal test}. This reduces both time
and memory because the entire round in which this node travels through
OPEN (implemented as a FIFO queue in BRFS) until it reaches the front and
is extracted can be omitted. This saves the processing of a full level of
the tree. For example, we have found that this improvement alone reduces
the memory consumption of BRFS on the 15-puzzle by 36\%. In
Table~\ref{tab:enhancementResults} we also provide results for CDD and UDD
when an early goal test is performed. Comparing this to the regular goal
test shows a reduction of almost a factor of 2 in runtime.

Unlike BRFS, for the general case of BFS (e.g., A*) generating a goal does
not guarantee that the optimal solution has been found, because a goal
with a lower cost can be identified later in the process. Thus,
implementers usually follow the textbooks and only halt the search when a
goal node is chosen for expansion. However, as recognized by some
researchers
\cite{hansen2007anytimeHeuristic,likhachev2004ara*:anytime,ikeda1999EnahncedA,zhou2003sparseMemoryGraph},
early goal test is valid for general BFS too.

In the case of BFS, the early goal test is performed on generated nodes,
and the cost of the best solution found so far is used as an upper bound,
$UB$ (initialized to $\infty$). Once a goal has been generated, the role
of the following expansion cycles is to verify that no better solution
exists. This is verified when the cost of the best node in OPEN is
\emph{greater than or equal to} $UB$, and only nodes with cost {\em less
than} $UB$ should be expanded. Therefore, as soon as a goal node has been
generated with a cost of $c$, we can immediately delete from OPEN all
nodes with costs {\em greater than or equal to} $c$. In addition, any
newly generated node with such a cost can be pruned immediately, as it
cannot lead to a better solution. A significant amount of time and memory
can be saved this way.

\begin{algorithm}[tb]
\SetLine \KwIn{$v$, the \emph{best} node in the open list} \KwIn{$UB$, an
upper bound (initialized with $\infty$)}

\uIf{cost($v$) $\geq$ UB}{
    halt \tcc*[l]{Optimality verified}
}
insert $v$ to CLOSED \\
\ForEach{operator $op$}{
    $child \leftarrow$ generateNode($op,v$) \\
    \uIf{cost($child$) $\geq UB$}{
        continue \tcc*[l]{Prune the node}
    }
    \uIf{goalTest($child$)=True}{
        $UB$=min($UB$,cost($child$))\\
        Delete nodes with $cost \geq UB$ from OPEN.
    }
    \eIf{duplicateDetection($child$)=False}{
        insert $child$ to OPEN
    }{ % Else
        update cost function of $child$ (if required)
    }
} \caption{Expansion cycle of BFS with early goal test}
\label{alg:childGoalTest}
\end{algorithm}

Algorithm~\ref{alg:childGoalTest} presents the pseudo code of a node
expansion cycle with the early goal test enhancement. We believe that this
version, performing the goal test when generating a node, should become
the standard way of describing BFS and BRFS, and we hope that textbooks
will adopt it. This idea of early goal test is generalized later in our
new BFS with lookahead algorithm.
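A compact runnable Python sketch of this expansion cycle (an illustration under simplifying assumptions, not the paper's code): goals are tested at generation, the best one seen becomes the upper bound $UB$, and, as a design choice, nodes with $f \geq UB$ are pruned lazily at pop time rather than eagerly deleted from OPEN as in the pseudo code.

```python
import heapq

def bfs_early_goal_test(start, goal, neighbors, h):
    """A*-style BFS with early goal test. `neighbors(n)` yields
    (successor, edge cost); `h` is an admissible heuristic.
    Returns the optimal solution cost, or None."""
    ub = float('inf')                       # cost of best solution so far
    open_list = [(h(start), 0, start)]      # (f, g, state)
    closed = set()
    while open_list:
        f, g, n = heapq.heappop(open_list)
        if f >= ub:
            return ub                       # optimality verified
        if n in closed:
            continue
        closed.add(n)
        for m, w in neighbors(n):
            gm, fm = g + w, g + w + h(m)
            if fm >= ub:
                continue                    # cannot beat the incumbent
            if m == goal:
                ub = min(ub, gm)            # early goal test: record only
                continue
            if m not in closed:
                heapq.heappush(open_list, (fm, gm, m))
    return ub if ub < float('inf') else None
```

Lazy pruning keeps the code short; eager deletion from OPEN, as in Algorithm 4, additionally saves the memory those nodes occupy.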


%It is very clear that performing the goal test at the node generation is
%much more efficient in the 15-tile puzzle. This result is not surprising,
%since in Breadth-First Search as soon as a goal is encountered the search
%immediately halts and the optimal solution is found. Thus generating the
%children of the entire level of the goal node is avoided, resulting in a
%major speedup of approximately 36\%.

{\bf Experimental results:} The last two lines of table
\ref{tab:enhancementResults} show UDD and CDD results for both
representation methods when running  BRFS with early goal test. It is easy
to see that performing a goal test during node generation is much more
efficient, saving approximately 36\% for the explicit state representation
and about 40\% for the packed state representation for both UDD and CDD.
Results for general BFS with early goal test will be presented below as
special case of our new algorithm BFS with lookahead.





%in section. %~\ref{sec:experimentalResultsLookahaed}.


%

%This idea is expected to save memory as simpler data structures are
%needed. Traditional for the CLOSE and the OPEN lists are implemented as a
%hash with extra pointers to link the OPEN list nodes. For delayed
%duplicate detection however, the OPEN list can be implemented as simple
%list and therefore eliminate the need for maintaining the extra pointers.
%This can also save time during node insertion to the OPEN list. Another
%advantage is that the duplicate detection is done on the expanded nodes.
%Since the duplicate detection activity is time consuming and since it is
%done on less nodes than the traditional BFS, the total execution time is
%expected to decrease.

\section{Best-First Search with Lookahead}
%

%The idea of testing generated nodes for goal, as opposed to performing
%this test before node expansion algorithm, is straightforward and trivial
%for Breadth First Search. The optimality of the algorithm remains due to
%the fact that it uses the node level as the cost function. Since the
%search graph is scanned level by level, the OPEN list contains nodes in
%level K, and may contain node in level K+1. If an expanded node in level K
%generates a goal node in level K+1, the search can end because the
%following nodes in the OPEN list may have the potential to generate only
%goals with similar or higher cost. In fact, ending the search when a goal
%is generated, instead of expanded, simply prevent the search from
%expanding nodes that clearly do not lead to an optimal solution.

%In this section we show how the enhancements discussed above
%allow us to develop a novel DFS-based lookahead extension to BFS. We
%analyze this extension and show that it yields a substantial improvement
%in both memory consumption and execution time in several domains.

Based on the different ideas presented above, we now describe our new
algorithm, {\em BFS with lookahead} (BFSL), which generalizes these ideas
and introduces a novel combination of limited DFS and BFS. For simplicity,
we begin by describing it on breadth-first search and then move to the
more general case of best-first search.

\subsection{Breadth-first Search with Lookahead}


\begin{algorithm}[tb]
\SetLine \KwIn{$n$, the \emph{best} node in the open list}

\ForEach{state operator $op$}{
    $child \leftarrow$ generateNode($op,n$) \\
     \uIf{DFS-with-GoalTest($child,k$)=True}{
        halt \tcc*[l]{Goal was found}
     }
    \eIf{duplicateDetection($child$)=False}{
        insert $child$ to OPEN
    }{ % Else
        update cost function of $duplicate$ (if required)
    }
}
insert $n$ to CLOSED \\
\caption{Expansion cycle of BRFSL($k$)}
\label{alg:breadthFirstSearchWithLookahead}
\end{algorithm}


BRFS with lookahead ($BRFSL(k)$) combines BRFS with limited DFS search to
depth $k$. As in ordinary BRFS, BRFSL maintains both OPEN and CLOSED. At
each iteration, node $n$ with smallest depth in OPEN is extracted.
However, before adding $n$ to CLOSED a lookahead to depth $k$ is performed
by applying a limited DFS to depth $k$ from $n$. The goal test is
performed only on nodes at depth $k$. If a goal is found during the lookahead,
the algorithm halts immediately. If a goal is not found, node $n$ is added
to CLOSED, and its immediate successors are added to OPEN.
Algorithm~\ref{alg:breadthFirstSearchWithLookahead} presents the pseudo
code for BRFSL. Line 3 performs a limited DFS from $child$; if a goal is
found during this limited DFS, {\em DFS-with-GoalTest} returns {\em True}.


An exception is the first step of expanding the root. In this case we do
not stop when a goal is reached but continue the DFS to verify that no
goal exists at shallower levels. In fact, at this step, iterative
deepening (or a limited BRFS) can be performed in order to either find a
goal at depth {\em less than or equal to $k$} or to verify that no goal
exists at these levels. Note that ordinary BRFS is the special case of
BRFSL(0), while BRFS with the early goal test described above is the
special case of BRFSL(1).
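To make the expansion cycle concrete, the following is a minimal Python sketch of BRFSL($k$); the helpers \texttt{successors} and \texttt{is\_goal}, and the use of one hashable object per state, are illustrative assumptions rather than part of our implementation.

```python
from collections import deque

def dfs_goal_test(state, depth, successors, is_goal, test_all=False):
    # Limited DFS from state; the goal test is applied only at the
    # deepest level, unless test_all is set (as when expanding the root).
    if depth == 0:
        return is_goal(state)
    if test_all and is_goal(state):
        return True
    return any(dfs_goal_test(c, depth - 1, successors, is_goal, test_all)
               for c in successors(state))

def brfsl(root, k, successors, is_goal):
    # Breadth-first search with lookahead to depth k (BRFSL(k)).
    # Special root step: test every node at depth <= k.
    if dfs_goal_test(root, k, successors, is_goal, test_all=True):
        return True
    open_list, closed = deque([root]), {root}
    while open_list:
        n = open_list.popleft()
        for child in successors(n):
            # Lookahead: DFS to depth k below the generated child,
            # goal testing only the deepest nodes.
            if dfs_goal_test(child, k, successors, is_goal):
                return True
            if child not in closed:      # duplicate detection (DD)
                closed.add(child)
                open_list.append(child)
    return False
```

Following the pseudocode, the depth-$k$ DFS is launched from each generated child, so that nodes one level deeper than the OPEN frontier trigger the lookahead.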


\begin{figure}[tb]
\centering
\includegraphics[width=9cm]{lookaheadBreadthFirstExample.eps}
\caption{Example of BRFSL(2)} \label{fig:lookaheadBreadthFirstExample}
\end{figure}

%[[AF: the pictures can be improved too]]

%We do not specify the solution path reconstructing method, as any well
%known technique such as divide-and-conquer can be used in this case.

Three expansion cycles of BRFSL(2) are illustrated in
Figure~\ref{fig:lookaheadBreadthFirstExample}. Dark nodes indicate the
expanded nodes, light nodes are the nodes visited during lookahead steps,
and nodes with bold outlines were also goal tested. In the first expansion
cycle (a), a DFS is performed from the root node $A$ to depth 2. Since
this is the first step, a goal test is performed for all nodes. Assuming
no goal was found in this cycle, the algorithm will add the direct
successors of the root, $B$ and $C$, to OPEN. In the next iteration (b),
node $B$ is expanded. Before generating its immediate successors ($D$ and
$E$), a lookahead to depth 2 is performed, where only nodes at the deepest
lookahead level are tested for goal (nodes $H$, $I$, $J$ and $K$). Assuming
no goal is found, $D$ and $E$ are added to OPEN and a new iteration starts
(c).

\subsubsection{Completeness and Optimality}

It is easy to see that the algorithm is complete, i.e., it always finds a
goal if one exists. The algorithm also returns an optimal solution.
Assume the goal node is at depth $d$ and the lookahead is performed to
depth $k$. When expanding nodes at depth $d-k$ we are actually peeking at
nodes at depth $d$ for the first time. Since nodes at depths smaller than
$d$ were peeked at during earlier expansions, when a goal node is found at depth
$d$ it is optimal.

\subsubsection{Memory complexity}

Assume that the brute-force branching factor of the state space is $b$ and
that $b_e$ is the {\em effective} branching factor, i.e., the number of
unique successors after applying DD. It is important to note that
$b_e\leq{}b$, due to cycles in the state space. Now, assume that the depth
of the solution is $d$ and the lookahead is performed to depth $k$. When
the goal node is found only nodes up to depth $d-k$ are stored in OPEN and
CLOSED. Thus, the space complexity of $BRFSL(k)$ (in the worst case, where
the goal node is found below the last node at depth $d-k$) is
\[1+b_e+b_e^2+b_e^3+ \dots +b_e^{d-k}=O(b_e^{d-k})\] Clearly
this requires less space than standard BRFS ($=$ BRFSL(0)), which requires
$O(b_e^d)$ memory.\footnote{Strictly speaking, $b_e$ is not constant and
may vary with the depth of the search. However, it is always less than or
equal to $b$, and thus our analysis is valid.}
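To illustrate the saving with hypothetical numbers (not measured values), take $b_e=2$, $d=20$ and $k=5$: BRFSL(5) stores $O(b_e^{d-k})=O(2^{15})$ nodes, while BRFS stores $O(b_e^{d})=O(2^{20})$ nodes. In general, the lookahead reduces the memory requirement by a factor of $b_e^{k}$ ($2^5=32$ in this example).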

\subsubsection{Time complexity}

We differentiate between two types of nodes: \emph{expanded} nodes and
\emph{visited} nodes. \emph{Expanded} nodes are nodes that are expanded
from OPEN. Based on the space complexity analysis above there are
$1+b_e+b_e^2+b_e^3+ \dots +b_e^{d-k}$ such nodes. \emph{Visited} nodes are
nodes that are visited during the lookahead DFS calls. A single DFS
lookahead search to depth $k$ visits $1+b+b^2+b^3+ \dots +b^{k}=O(b^k)$
nodes. We perform such lookaheads for all expanded nodes and thus the
total number of nodes that are visited during all DFS lookaheads is
$(1+b_e+b_e^2+b_e^3+ \dots +b_e^{d-k}) \times (1+b+b^2+b^3+ \dots +b^{k})$,
which amounts to $O(b_e^{d-k}\times{}b^{k})$. This is larger than the
number of nodes visited by ordinary BRFS ($=1+b_e+b_e^2+b_e^3+ \dots
+b_e^d=O(b_e^d)$) for two reasons. First, unlike regular BRFS, where every
node is visited only once, in BRFSL($k$) every node at a depth larger than
$k$ and smaller than $d$ is visited at least $k$ times during the lookaheads
of previous levels. Second, in the DFS lookaheads we do not prune
duplicate nodes and if the state space contains cycles, duplicate nodes
are visited via different subtrees. In other words, for the DFS steps we
use $b$ as the base of the exponent while for the BRFS steps we use $b_e$.
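The overhead can be quantified with the same notation: rewriting the bound as $b_e^{d-k}\times b^{k} = b_e^{d}\times(b/b_e)^{k}$ shows that BRFSL($k$) visits a factor of $(b/b_e)^{k}$ more nodes than ordinary BRFS. For instance, with hypothetical values $b=3$ and $b_e=2$, each increment of $k$ multiplies the number of visited nodes by $1.5$.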

However, in practice $BRFSL(k)$ may run faster than BRFS for a number of
reasons. First, as in BRFS with early goal test, we stop the search when a
goal is reached during a lookahead. Second, the lookahead nodes can be
generated by only performing CHANGE operations on the expanded node. This
avoids the need to COPY an entire state for every node in the lookahead,
yielding an improved runtime (similar to UDD above). Therefore, the
constant time per node during the lookahead may be much smaller than the
constant time required to completely generate a node and insert it into
OPEN. In addition, no duplicate detection check is done at the DFS stage,
which also saves time, since the DD check might be time consuming. Thus, for
small values of $k$, $BRFSL(k)$ might run faster than BRFSL(0).
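The CHANGE-based generation can be sketched as follows. This is an illustrative Python fragment, not our actual implementation; the operator callbacks \texttt{apply\_op} and \texttt{undo\_op} are hypothetical names. It shows how a lookahead DFS can mutate a single shared state and undo each move on backtracking, instead of copying a full state per visited node.

```python
def dfs_lookahead_inplace(state, depth, ops, apply_op, undo_op, is_goal):
    # Depth-limited DFS that applies each operator in place (CHANGE)
    # and undoes it when backtracking, avoiding a COPY of the whole
    # state for every visited node.
    if depth == 0:
        return is_goal(state)
    for op in ops(state):
        apply_op(state, op)
        found = dfs_lookahead_inplace(state, depth - 1, ops,
                                      apply_op, undo_op, is_goal)
        undo_op(state, op)               # restore the shared state
        if found:
            return True
    return False
```

For example, with a one-cell counter state and $+1/-1$ operators, the same list object is reused for the entire lookahead and is intact when the call returns.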



%Formalizing these considerations, let $C_exp$ be the cost of expanding a node, and $C_IG$ be the cost of generating it during the lookahead. $B$ is the branching factor and $EB$ is the effective branching factor described above. If the goal is at depth $d$, the total amount of nodes expanded in BRFSL($k$) is $EB^{d-k}$. Thus the total cost is $EB^(d-k)\times{}C_exp+ EB^{d-k}\times{}B^{k}\times{}C_IG$. Comparing this to standard BRFS, where the total cost is $EB^d\times{}C_exp$, it is clear that for extremely large $k$ ($\lim{k} \rightarrow \infty}$) the standard BRFS will be more efficient. As $k\rightarrow{}d$

%However for smaller $k$

When increasing $k$, more nodes are visited by $BRFSL(k)$ because of the
two reasons above (duplicates and overlapping lookaheads). At some point,
this will outweigh the fact that the constant time per node is smaller. The
optimal value for $k$ is domain dependent and is strongly related to the
rate of duplicates and cycles in the domain and to the constants involved.


\subsubsection{Experimental Results}
\label{sec:experimentalResultsLookahaed}



\begin{table}[tb]
\small
\begin{tabular}{|c|r|r|r|r|}
\hline
\multicolumn{5}{|c|}{{\bf Fifteen Puzzle}}\\
\hline
$k$ & Expanded & Opened & DFS & Time \\
\hline
0 & 10,373,537 & {\bf 19,473,242} & 0 & 16.62 \\
0+LDD & {\bf 11,164,851} & 23,449,495 & 0 & 13.90\\
\hline \hline
1 & 5,978,496 & {\bf 11,305,305} & 12,566,619& 9.56 \\
1+LDD & {\bf 5,902,723} & 18,310,493 & 12,566,619 & 8.94\\
\hline
2 & 3,138,531 & {\bf 5,978,495} & 20,674,663 &6.10 \\
2+LDD & {\bf 3,098,076} & 9,616,294 & 20,674,663 & 6.40\\
\hline
3 & 1,634,463 & {\bf 3,138,530}& 26,483,929 &4.80 \\
4 & 845,867 & {\bf 1,634,462}& 31,085,133 &{\bf 4.26} \\
5 & 434,590 & {\bf 845,866}& 34,966,998 &{\bf 4.25} \\
6 & 222,132 & {\bf 434,589} & 38,537,864 &{\bf 4.31} \\
7 & 112,758 & {\bf 222,131} & 41,900,800 &4.60 \\
8 & 56,961 & {\bf 112,757} & 45,243,397 &4.78 \\
9 & 28,568 & {\bf 56,960} & 48,437,217 &5.21 \\
10 & 14,258 & {\bf 28,567} & 51,578,154  &5.37 \\
11 & 7,050 & {\bf 14,257} & 54,304,459 &5.78 \\
12 & 3,467 & {\bf 7,049} & 56,902,134 &5.92 \\
13 & 1,682 & {\bf 3,466} & 58,866,868 &6.25 \\
14 & 808 & {\bf 1,681} & 60,560,268 &6.29 \\
15 & 385 & {\bf 807} & 61,653,712  &6.58 \\
16 & 182 & {\bf 384} & 62,489,184 &6.48 \\
17 & 86 & {\bf 181} & 62,764,526 &6.69 \\
18 & 40 & {\bf 85} & 62,911,950 &6.58 \\
19 & 18 & {\bf 39} & 63,225,908  &6.55 \\
20 & 8 & {\bf 17} & 63,894,484 &6.35 \\
21 & 3 & {\bf 7} & 65,318,508 &6.00 \\\hline
\multicolumn{5}{|c|}{{\bf (12,4) TopSpin Puzzle}}\\
\hline
0 & 1,845,383 & {\bf 12,575,891} & 0 &17.35 \\
1 & 258,032 & {\bf 1,854,382} & 3,096,386 &3.31 \\
2 & 34,887 & {\bf 258,031} & 5,442,500 &{\bf 2.99} \\
3 & 4,592 & {\bf 34,886} & 8,651,828 &4.5 \\
4 & 583 & {\bf 5,591}& 13,207,502 & 6.68 \\
 \hline
\end{tabular}
\caption{Results of BRFSL.} \label{tab:lookaheadBreadthTilePuzzle}
\end{table}

We experimented with BRFSL($k$) on the 15 puzzle, for different values of
$k$. Table~\ref{tab:lookaheadBreadthTilePuzzle} presents the results
averaged over our 50 depth-22 instances. The $k$ column shows the lookahead
depth (where $k=0$ is ordinary BRFS and $k=1$ is simply BRFS with early
goal test). The \emph{Expanded} column shows the number of nodes expanded
during the search, the \emph{Opened} column shows the number of nodes
inserted into OPEN, the \emph{DFS} column shows the number of nodes visited
during the DFS lookahead phase, and the \emph{Time} column shows the
runtime in seconds. As expected, larger values of $k$ consistently reduce
the memory needs ($=$ number of nodes in the hash table, labeled in {\bf
bold}) compared to ordinary BRFS (BRFSL(0)). Furthermore, for all values
of $k$ the search was also faster than BRFSL(0). The best runtime was
achieved for $k=5$, where the speedup was a factor of 4 and the memory
reduction was by a factor of 20.



Similar results were achieved for 50 instances of TopSpin(12,4) at depth
14. They are presented at the bottom of the table. In TopSpin, the optimal
lookahead depth was 2 (compared to 5 in the 15 puzzle). This is because
TopSpin contains smaller cycles.
% RS+TK: TAMAR WILL COMPLETE THE DATA


The table also shows results for combining LDD and BRFSL for small $k$
values. Using LDD visited more nodes, but a smaller number of nodes were
stored in the hash table (labeled in {\bf bold}). This reduced the overall
memory needs. In addition, LDD yielded a speedup in the runtime.

\section{A* with Lookahead}

We now generalize this concept to a general best-first search with lookahead, using A* with lookahead (AL*) as an example.

\subsection{Trivial Lookahead}
\label{sec:trivialLookahead}

%\begin{figure}[h]
%\centering
%\includegraphics[width=4cm]{lookahead0example.eps}
%\caption{Example of A* with trivial lookahead.}
%\label{fig:lookahead0example}
%\end{figure}

In BRFS the cost function is the depth of the node ($d$). Thus, when expanding a node with cost $x$ we always generate nodes with cost $x+1$. By contrast, in A* the cost function is $f=g+h$, and if $h$ decreases, a generated node may have the same cost as its parent. In this case, the generated node is inserted into
OPEN, but can then be immediately expanded. Inserting into OPEN is a costly
operation, as priority queue operations generally take $O(\log n)$ time. Thus, we
suggest a simple enhancement to A* where generated nodes that have the
same cost as their (just expanded) parents are immediately expanded too.
This is done as follows. Assume that node $v$ with cost $c$ is extracted
from OPEN. We now perform a DFS rooted at $v$. Successors with a cost of
$c$ are immediately goal tested and added to CLOSED. When successors with a cost greater than $c$ are encountered, the DFS backtracks and these nodes are generated and added to OPEN. It is important to note that this enhancement is only valid if the cost function is {\em monotonically increasing} along any branch.
In domains where many successors share their parent's cost, this
enhancement is expected to substantially reduce execution time, as we bypass OPEN for many nodes in comparison to classic A*. Note that this enhancement
makes BFS behave more like BRFS, where each expanded node adds nodes with the
next largest cost. We denote this enhancement as {\em trivial} lookahead.
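As a minimal sketch (not our actual implementation), trivial lookahead can be written as a DFS over the same-cost plateau inside the A* expansion loop; \texttt{successors} is assumed to yield \texttt{(child, edge\_cost)} pairs and \texttt{h} is assumed consistent, so that $f=g+h$ is monotonically increasing along every branch.

```python
import heapq

def astar_trivial_lookahead(start, successors, h, is_goal):
    # A* where a generated node whose f-value equals its parent's is
    # expanded immediately via a DFS over the same-cost "plateau",
    # bypassing the O(log n) insertion into OPEN.
    # Returns the optimal solution cost, or None if no goal exists.
    open_list = [(h(start), 0, start)]   # entries are (f, g, state)
    closed = set()
    while open_list:
        f, g, state = heapq.heappop(open_list)
        if state in closed:
            continue
        stack = [(g, state)]             # DFS over nodes with cost f
        while stack:
            g_v, v = stack.pop()
            if v in closed:
                continue
            if is_goal(v):
                return g_v
            closed.add(v)
            for child, cost in successors(v):
                g_c = g_v + cost
                f_c = g_c + h(child)
                if f_c == f:             # same cost: expand immediately
                    stack.append((g_c, child))
                else:                    # larger cost: goes through OPEN
                    heapq.heappush(open_list, (f_c, g_c, child))
    return None
```

Every node still enters CLOSED exactly once; the saving is that plateau nodes never pay the priority-queue cost.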



%Algorithm~\ref{alg:bestFirstSearchWithTrivialLookahead} presents the pseudo code for this enhancement.

%\begin{algorithm}[tb]
%\SetLine \KwIn{$v$, the \emph{best} node in the open list} \KwIn{$UB$, an
%upper bound, initialized with $\infty$}

%\uIf{cost($v$) $\geq$ $UB$}{
%    halt \tcc*[l]{Optimality verified}
%}
%insert $v$ to closed list \\
%\ForEach{operator $op$}{
%    $child \leftarrow$ generateNode($op,v$) \\
%    \uIf{cost($child \geq UB$)}{
%    continue \tcc*[l]{Prune the node}
%    }
%   \uIf{goalTest($child$)=True}{
%        $UB$=cost($child$)\\
%        Delete nodes with $cost \geq UB$ from Open.
%   }
%    \eIf{cost($child$)=cost($v$)}{
%        expand($child$)
%    }{ % Else
%        \eIf{duplicateDetection($child$)=False}{
%            insert $child$ to open list
%        }{ % Else
%            update cost function of $child$ (if required)
%        }
%    }
%} \caption{BFS with trivial lookahead}
%\label{alg:bestFirstSearchWithTrivialLookahead}
%\end{algorithm}




%This enhancement saves redundant insertions to the open list at the expense of simply mimics the way the classic A* explore the graph, and simply skip the redundant step of adding nodes with the same f-cost as their predecessors to the OPEN list and immediately deleting them from the OPEN list and adding it to the CLOSE list. As a result of this enhancement, best first search is more like breadth first search where each expanded node adds nodes with the next largest f-cost.

\begin{figure}[tb]
\centering
\includegraphics[width=5cm]{aLookaheadExample.eps}
\caption{Example of an AL* iteration.} \label{fig:aLookaheadExample}
\end{figure}

For example, consider the graph in Figure~\ref{fig:aLookaheadExample}. The
search starts at node $A$ and the goal node is $I$. Using trivial
lookahead, nodes $C$, $F$ and $J$ will be immediately expanded when expanding
node $A$ and inserted directly into CLOSED, saving the cost of inserting
them into and extracting them from OPEN. Similarly, node $H$ will be immediately
expanded when expanding node $D$.

\subsection{A* with Arbitrary Lookahead}

\begin{algorithm}[t]
\SetLine \KwIn{$v$, the \emph{best} node in the open list}
\KwIn{$UB$, an upper bound, initialized with $\infty$}

\uIf{cost($v$) $\geq UB$}{
    halt \tcc*[l]{Optimality verified}
}
insert $v$ to CLOSED \\
\ForEach{operator $op$}{
    $child \leftarrow$ generateNode($op,v$) \\
    \uIf{cost($child$) $\geq UB$}{
        continue \tcc*[l]{Prune the node}
    }
    \uIf{goalTest($child$)=True}{
        $UB \leftarrow$ cost($child$)\\
        delete nodes with cost $\geq UB$ from OPEN
    }
    \eIf{cost($child$)=cost($v$)}{
        expand($child$) \tcc*[l]{Trivial lookahead}
    }{ % Else
        \eIf{duplicateDetection($child$)=False}{
            insert $child$ to OPEN
        }{ % Else
            update cost function of $child$ (if required)
        }
        $maxDepth \leftarrow \min(UB, cost(v)+k)$ \\
        \uIf{cost($child$) $< maxDepth$}{
            lookahead($child$,$maxDepth$,$UB$) \tcc*[l]{may update $UB$}
        }
    }
} \caption{Expansion cycle of A* with lookahead}
\label{alg:bestFirstSearchWithLookahead}
\end{algorithm}

\begin{procedure}[ht]
\KwIn{$v$, the current node in the DFS lookahead search}
\KwIn{$maxDepth$, the limit of the lookahead}
\KwIn{$UB$, an upper bound of the goal cost}

\ForEach{operator $op$}{
    $child \leftarrow$ generateNode($op,v$) \\
    \uIf{cost($child$) $\geq maxDepth$}{
    return
    }
   \eIf{goalTest($child$)=True}{
        $UB \leftarrow$ cost($child$)\\
   }{
        lookahead($child$,$maxDepth$, $UB$)
    }
}
\caption{lookahead()} \label{alg:bestFirstSearchLookaheadFunction}
\end{procedure}

Extending the trivial lookahead to an arbitrary lookahead of $k$ (denoted AL*($k$)) is done as follows. Assume that the cost of the expanded node is $c$. We perform a trivial lookahead, immediately expanding nodes with cost $c$. When a successor with a cost greater than $c$ is reached, it is inserted into OPEN, but a limited DFS is first activated from that node. Each node encountered during this DFS is goal tested. If a goal has been found with cost smaller than that of the currently best known solution $UB$, then $UB$ is updated. At this stage it is
possible to go over all the nodes in OPEN and remove nodes with cost {\em
larger than or equal to} $UB$.\footnote{While this option saves memory, it
slows the total execution time. We did not implement it in our
experiments.} The search halts when the optimal goal cost has been
verified, i.e., when the cost of a node chosen for expansion is equal to
$UB$. Pseudo code for AL*($k$) is presented in
Algorithm~\ref{alg:bestFirstSearchWithLookahead}. Note that when $k=0$,
the algorithm describes a trivial lookahead, mentioned in the previous section.
%Section~\ref{sec:trivialLookahead}.
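The lookahead() procedure can be sketched in Python under the same illustrative assumptions as before (\texttt{successors} yields \texttt{(child, edge\_cost)} pairs and \texttt{h} is the heuristic); here nodes at or beyond the horizon are simply skipped, and the upper bound is threaded through as a return value.

```python
def lookahead(v, g_v, max_f, ub, successors, h, is_goal):
    # DFS lookahead from node v with path cost g_v: visits nodes whose
    # cost f = g + h is below max_f = min(UB, cost(v) + k), goal tests
    # each of them, and returns a possibly improved upper bound ub.
    for child, cost in successors(v):
        g_c = g_v + cost
        f_c = g_c + h(child)
        if f_c >= max_f:
            continue                     # beyond the lookahead horizon
        if is_goal(child):
            ub = min(ub, f_c)            # goal found: tighten the bound
        else:
            ub = lookahead(child, g_c, max_f, ub, successors, h, is_goal)
    return ub
```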

Figure~\ref{fig:aLookaheadExample} illustrates an expansion cycle of AL*($4$), when node $A$ is chosen for expansion. First, a trivial lookahead is performed, expanding all successors of $A$ that have the same cost as $A$ ($=22$). These are nodes $C$, $F$ and $G$, marked with light gray circles. Each child of these nodes with a cost larger than 22 is inserted into OPEN, and a lookahead starts from that node. These nodes are $B$, $K$, $G$ and $D$, marked with dark gray circles. During a lookahead, all successors with cost lower than 26 ($22+4$) are visited and goal tested. Thus, the nodes visited during the lookahead are $E$, $L$, $H$ and $M$,
marked with a doubled circle.

In many domains the costs of nodes increase by a fixed amount. For
example, in the tile puzzle, nodes with a cost of $x$ can have children with
costs of either $x$ or $x+2$. In such cases AL*($k$) is very similar to
BRFSL($k$). Here we can logically treat all nodes with a given cost as a
level of the search. In BRFS, each level has width 1, while in AL* each
level is of arbitrary width.


\subsection{Experimental Results}
\label{sec:lookaheadAstarResults}

\begin{table}%[tb]
\small
\begin{tabular}{|c|r|r|r|r|r|}
\hline
$k$  & Expanded & Trivial & Opened & DFS & Time \\
\hline
\multicolumn{6}{|c|}{{\bf Fifteen Puzzle 7-8 PDB}}\\
\hline
A* & 21,004 & 0& 41,676 & 0 & 0.30 \\
A*e & 21,004 & 0 & 41,676 & 0 & 0.30\\
0 & 21,004 &17,202 & 25,474 & 0 & {\bf0.14} \\
2 &  18,245&15,181 & 4,602 & 63,163 & 0.24 \\
4 & 2,801 &2,365 & 848 & 136,988 &  0.30 \\
6 & 399 & 336& 141 & 180,275 & 0.38 \\
8 & 55 & 47 & 17 & 202,664 & 0.42 \\
10 & 7 & 6 & 1 & 233,587 & 0.48 \\
\hline\multicolumn{6}{|c|}{{\bf (16,4) TopSpin}}\\
\hline
A* & 662,256 &  0& 5,109,005 & 0 & 45.38 \\
A*e & 662,256 & 0 & 2,226,886 & 0 & 33.10\\
0 & 662,256  & 419,813 & 1,807,073 & 0 & 29.80\\
1 & 524,022 & 365,178 & 1,022,571 & 17,194,709 & 34.06 \\
2 & 68,133 & 48,615& 194,941 & 26,025,625 & {\bf 24.56} \\
3 & 8,095 & 5,935 & 31,728 & 38,403,901 & 33.00 \\
4 & 1,349  & 971& 5,051 & 59,933,594 & 51.27 \\
\hline
\end{tabular}
\caption{AL* on the puzzles}
\label{tab:lookaheadResults}
\end{table}

We implemented AL* on the 15-puzzle with the 7-8 additive PDB heuristic
\cite{ADBAIJ02} and on the (16,4)-TopSpin with a 9-token PDB heuristic.
Table \ref{tab:lookaheadResults} presents the results averaged over 100
random instances on both puzzles. The $k$ column indicates the threshold
of the lookahead, where $0$ denotes the trivial lookahead. {\em Expanded}
counts the total number of nodes that entered CLOSED while {\em Trivial}
counts the number of nodes that entered CLOSED while bypassing OPEN
(trivial lookahead). Trivial expansions amount to more than 50\% of the
total expansions in both puzzles, yielding both memory and time reductions.
The rest of the columns are similar to those of previous tables.

In both domains, using AL* with larger lookaheads yields a substantial
improvement in terms of memory. For example, in the 15 puzzle, using a
lookahead of 2 reduces the number of stored nodes\footnote{This is the sum
of the {\em Trivial} and {\em Opened} columns.} by a factor of 3, and in
TopSpin using a lookahead of 3 reduces the number of stored nodes by 2 orders
of magnitude. Additionally, the results show that for many $k$ values
substantial time reductions can be achieved. The best runtime over all
the lookaheads is denoted in {\bf bold}. For TopSpin, the best time was
achieved using a lookahead of 2, yielding a 30\% reduction in runtime, while
for the 15-tile puzzle the optimal lookahead was 0, achieving a factor
of 2 reduction in runtime in comparison to classic A*.

The A*e line refers to A* with early goal test. It is interesting to note
that A*e is identical to A* in the tile puzzle. The reason is as follows.
The goal state has the blank in the upper left corner, where it has only two
neighbors, one of which is its parent in the search tree. It can be shown
that for the PDB used, the heuristic only decreases by one when the blank
moves into its goal position, and thus the $f$-value of that node is the same as its parent's. In
such cases, early goal test is meaningless, as this node would be expanded
right away. For TopSpin, the goal can have a larger $f$-value than its parent, and many node
generations can be saved (as can be seen in the {\em Opened} column).

\section{Conclusion and Future Work}

We have shown a number of ways to enhance BFS and BRFS in order to better
utilize memory and time. Additionally, we have introduced a novel
approach for incorporating a DFS-based lookahead into BFS algorithms, in
particular into BRFS and A*. Experimental results supported our direction.
This research is in progress, and the following directions are currently
being pursued.


\subsection{Combining AL* with BPMX}

Bidirectional pathmax (BPMX) is a method that propagates heuristic values
in any possible direction when inconsistent heuristics are used
\cite{DUAL2005,INCON2007}. BPMX is easily implemented with IDA* or with
any other DFS search, as values can easily be propagated between nodes and
their neighbors during the DFS. However, \cite{ICAPS09} showed that
applying BPMX in A* is much more problematic, and only a limited version of
BPMX proved useful within the context of A*. However, when adding DFS
lookaheads we can once again use BPMX in its regular DFS form, and the
potential gain is very large. Initial results show promise, but we leave
this for a future discussion.

\subsection{Predicting the Optimal Lookahead}

We have shown that by carefully choosing a lookahead depth, both AL* and
BRFSL can achieve substantial runtime speedups. However, we have not yet
developed a technique for calculating the optimal lookahead. Future work
will include analysis of the effects of the lookahead on the runtime and
development of an optimal lookahead prediction technique. We also
intend to investigate the option of a variable-depth lookahead, i.e.,
learning the domain characteristics throughout the search and
adapting the lookahead depth accordingly. \small
\bibliography{lookahead}
\bibliographystyle{aaai}
\end{document}
