\section{Performance Analysis}
\label{analysis}

Analyzing the performance of a
heuristic search algorithm applied to a general
planning problem is difficult.
In this section, we analyze
the performance of the proposed RW-BFS
algorithm on a simplified model
to draw insights on when
stochastic search can improve the overall
efficiency.

To establish our model, we introduce the following two 
definitions.

\begin{defn}[N-Neighbor]
For any state $x \in S$, the {\em N-Neighbor} of $x$, denoted by
$N(x, d)$, is the set of states that are reachable from $x$ using at most $d$ actions.
\end{defn}

\begin{defn}[Plateau Graph] 
Given a planning task $\mathcal{T}$, a plateau
graph $G_{d} = (V, E)$ of state $x$ is a simple digraph
with $V = N(x, d)$ satisfying: 1) there is an edge $(s_{i}, s_{j}) \in E$
if and only if there is an action that leads from $s_{i}$ to $s_{j}$; 2)
every state $s \in V$ satisfies $h(s) \ge h(x)$,
and there are no dead-end states in $V$.
An exit state of $G_{d}$ is a state $s_{e} \notin V$
such that $h(s_{e}) < h(x)$.
\end{defn}

For a given plateau graph $G_{d}$, in the following
analysis we compare the number of heuristic evaluations
required to escape from $G_{d}$ for best-first search and for random walk
algorithms, since heuristic evaluation takes up most of the time in both.
To simplify our analysis, we assume
that the random walk is unbiased, meaning that instead of picking the best state among all $n$ candidates shown in Lines 7--8 of Algorithm~\ref{walk}, a random $s'$ is chosen with probability $1/n$ to be $s_{min}$
in Algorithm~\ref{walk}. We further simplify the
structure of $G_{d}$ to a graph whose nodes all have the same in- and out-degrees. Without loss of generality, we assume that every node in $G_{d}$ has $p$ successors and $q$ parents, where $p \ge q$ and $p, q \in \mathbb{Z}^{+}$. We first consider the case where $G_{d}$ is a
tree, in which case $p > q = 1$.



\paragraph{Tree search. } We provide the following result for best-first search when $G_{d}$ is a tree. 
% for bfs
\begin{lemma}
\label{lem:bfs}
Given a plateau graph $G_{d}$ of $x$ that is a tree,
if $s_{e}$ is an exit state of $G_{d}$ with $s_{e} \in N(x, d+1) \backslash N(x, d)$, then a best-first search procedure, in the worst case where every state in $N(x, d)$ has to be explored before finding $s_{e}$, will evaluate the heuristic function value of $\frac{p^{d+1} -1}{p-1}$ states.
\end{lemma}

Note that a best-first search algorithm expands states according
to their order in the $open$ list. In this simplified
model, however, we are only interested in states in $N(x, d)$ and assume that
all states in $N(x, d)$ have to be explored.
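The count in Lemma~\ref{lem:bfs} is simply the size of a complete $p$-ary tree of depth $d$. As a quick sanity check, the following minimal Python sketch (with illustrative values of $p$ and $d$, not tied to any experiment) compares the closed form against a direct layer-by-layer count:

```python
def bfs_worst_case_evals(p: int, d: int) -> int:
    """Closed-form size of a complete p-ary tree of depth d:
    1 + p + p^2 + ... + p^d = (p^(d+1) - 1) / (p - 1)."""
    return (p ** (d + 1) - 1) // (p - 1)

def count_by_layers(p: int, d: int) -> int:
    """Direct count: layer k of the plateau tree holds p^k states."""
    return sum(p ** k for k in range(d + 1))

# Illustrative check for a few (p, d) pairs (p >= 2, since p > q = 1).
for p in (2, 3, 5):
    for d in (0, 1, 4):
        assert bfs_worst_case_evals(p, d) == count_by_layers(p, d)
```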

For an unbiased random walk, we have the following result. 

% for random walk
\begin{lemma}
\label{lem:rw}
Given a plateau graph $G_{d}$ of $x$ that is a tree, 
let $R$ be an unbiased random walk procedure
that explores the search space with walks of length $tl$,
and let $E$ be the number of exit states of $G_{d}$ in $N(x, tl) \backslash N(x, tl-1)$.
If the heuristic function is evaluated every $l$ steps in $R$,
the expected number of heuristic evaluations
before $R$ finds an exit state is $\frac{t p^{tl}}{E}$.
\end{lemma}

\noindent {\bf Proof.}
Since $G_{d}$ is a tree, starting from $x$,
each walk of $R$ reaches a state that is exactly $tl$ steps away from $x$.
Since $R$ is unbiased, the probability of hitting an exit state
of $G_{d}$ is therefore $\frac{E}{p^{tl}}$. That is to say, the expected
number of walks needed to find an exit of $G_{d}$ is
$\frac{p^{tl}}{E}$. Since there are $t$ heuristic evaluations
on each walk, the total expected number of heuristic evaluations is
$\frac{t p^{tl}}{E}$. \qed
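The restart argument above can be checked numerically. The following Monte Carlo sketch (a hypothetical setup with illustrative parameters, not the paper's experimental configuration) uses the fact that on a tree an unbiased walk of length $tl$ amounts to sampling one of the $p^{tl}$ depth-$tl$ states uniformly; the averaged evaluation count should approach $t\,p^{tl}/E$:

```python
import random

def expected_evals_mc(p, t, l, E, trials=20000, seed=0):
    """Estimate the expected number of heuristic evaluations before an
    unbiased restarting walk of length t*l on a p-ary tree hits one of
    E exit states at depth t*l. On a tree, one walk is equivalent to
    drawing a depth-(t*l) state uniformly at random."""
    rng = random.Random(seed)
    leaves = p ** (t * l)          # states exactly t*l steps from x
    exits = set(range(E))          # label the first E states as exits
    total_evals = 0
    for _ in range(trials):
        while True:
            total_evals += t       # h evaluated every l steps: t times per walk
            if rng.randrange(leaves) in exits:
                break              # this walk found an exit; stop restarting
    return total_evals / trials

p, t, l, E = 3, 2, 2, 9            # 81 depth-4 states, 9 of them exits
estimate = expected_evals_mc(p, t, l, E)
predicted = t * p ** (t * l) / E   # Lemma prediction: 2 * 81 / 9 = 18
```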

We see from Lemmas~\ref{lem:bfs} and \ref{lem:rw}
that an unbiased random walk can help best-first
search find a plateau exit with fewer heuristic evaluations when
$$\frac{p^{d+1} -1}{p-1} \ge \frac{t p^{tl}}{E}.$$
One sufficient condition for the above inequality is
that $d \le tl \le d + \log_{p} E - \log_{p} t$. Since $E$ is typically large and close to $p^{d}$ while $t$ is small,
the above condition is approximately:
$$ d \le tl \le 2d. $$
In our experiments, we set $t = 4$ and $l = 3$ to $9$.
From extensive empirical evaluation, the largest $d$ is about $20$ to $30$. In practice, random walks can be even more helpful, since our walk is not unbiased but biased toward states with lower $h$ values.
One insight drawn from this
is that $tl$ should be neither too small nor too large for the random walk
exploration to be helpful. However, since $d$ is
unknown during search, it is helpful to try
different $tl$ values when doing random walk exploration in Algorithm~\ref{findplat}.
For this reason, we dynamically vary $l$ within a certain range during search.
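As an illustration of this window, the sketch below (Python, with assumed parameter values and the approximation $E \approx p^{d}$ from the text) lists the walk lengths $tl$ for which the expected random-walk evaluations do not exceed the worst-case best-first count:

```python
def bfs_evals(p, d):
    # Worst-case best-first evaluations on a tree-shaped plateau of depth d.
    return (p ** (d + 1) - 1) / (p - 1)

def rw_evals(p, t, tl, E):
    # Expected unbiased random-walk evaluations (Lemma rw): t * p^tl / E.
    return t * p ** tl / E

# Illustrative parameters (assumed, not the paper's experimental settings):
p, d, t = 3, 5, 4
E = p ** d                  # the approximation E ~ p^d used in the text
wins = [tl for tl in range(1, 3 * d + 1)
        if tl >= d and rw_evals(p, t, tl, E) <= bfs_evals(p, d)]
# The helpful window for tl sits roughly inside [d, 2d].
```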








\paragraph{Graph search. }
Now we extend the above discussion to the
case where $G_{d}$ is a graph; namely,
we consider the case where $q > 1$.

\begin{lemma}
\label{bfs:graph}
Given a plateau graph $G_{d}$ of $x$ in which every
node has an in-degree of $q \ge 1$,
if there exists a state $s_{e} \in N(x, d+1)$ such
that $h(s_{e}) < h(x)$, then a best-first search procedure,
in the worst case, has to evaluate the heuristic value of $\frac{(p/q)^{d+1} -1}{(p/q)-1}$
states before finding $s_{e}$.
\end{lemma}

\noindent {\bf Proof.}
We prove this by induction on $d$.
The proposition is clearly true for $d = 0, 1$,
where we have $1$ and $1 + \frac{p}{q}$ states, respectively. Assume the proposition is
true for all $d < k$; then $|G_{k-1}| = \frac{(p/q)^{k} -1}{(p/q)-1}$ and
$|G_{k-2}| = \frac{(p/q)^{k-1} -1}{(p/q)-1}$. Thus, there
are $(p/q)^{k-1}$ states that are exactly $k-1$ steps away from $x$.
Since $G_{d}$ is a simple graph,
according to our degree assumption, there are $p (p/q)^{k-1} / q = (p/q)^{k}$
states that are exactly $k$ steps away from $x$. Thus,
we have $$|G_{k}| = \frac{(p/q)^{k} -1}{(p/q)-1} + (p/q)^{k} =
\frac{(p/q)^{k+1} -1}{(p/q)-1}, $$ so the proposition is also
true for $d = k$. \qed
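The induction above can be mirrored numerically: under the uniform-degree assumption, layer $k$ holds $(p/q)^{k}$ states, and summing the layers reproduces the closed form. A minimal sketch with illustrative $(p, q, d)$ values:

```python
def plateau_size(p, q, d):
    """Layer-by-layer count under the uniform-degree assumption:
    layer k holds (p/q)^k states, since each of the (p/q)^(k-1) states
    at depth k-1 has p successors and each depth-k state has q parents."""
    total, layer = 0.0, 1.0
    for _ in range(d + 1):
        total += layer
        layer *= p / q
    return total

def closed_form(p, q, d):
    # ((p/q)^(d+1) - 1) / ((p/q) - 1), requiring p > q.
    r = p / q
    return (r ** (d + 1) - 1) / (r - 1)

# Illustrative check (values assumed): the recurrence matches the closed form.
for p, q, d in [(4, 2, 3), (6, 2, 5), (9, 3, 4)]:
    assert abs(plateau_size(p, q, d) - closed_form(p, q, d)) < 1e-9
```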

On the other hand, it is easy to see that the structural change of $G_{d}$
does not affect the expected number of heuristic evaluations for
an unbiased random walk $R$,
as $R$ maintains no information on whether a state has been visited.
Hence, Lemma~\ref{lem:rw} still holds for $q > 1$.

We can see that $R$ can help best-first search find
an exit of $G_{d}$ if $$\frac{(p/q)^{d+1} -1}{(p/q)-1} \ge \frac{t p^{tl}}{E}.$$
We can derive a necessary condition for the above inequality
by replacing the left side with $(p/q)^{d+1}$ (an upper bound when $p/q \ge 2$). In this case,
any $tl$ must satisfy $ d \le tl \le (d+1)(1 - \log_{p} q) + \log_{p} E - \log_{p} t$.
It is easy to see that as $q$ increases from $1$ to $p$,
this upper bound shrinks and random walk exploration becomes less effective. The insight is that
RW-BFS is more effective when
there are few paths ($q$ is small compared to $p$) that can lead to the same state.

\noindent {\bf Impacts of dead ends and loops in $G_{d}$.} So far, we have assumed that there are no dead ends or loops in $G_{d}$. Because of the
{\em closed} list in best-first search, introducing dead ends or loops into $G_{d}$ does not change the number of heuristic evaluations for best-first search; namely, Lemma~\ref{bfs:graph} is still valid in this case.

On the other hand, for an unbiased random walk $R$, it is intuitively true
that loops and dead ends in $G_{d}$ decrease the probability that a walk reaches an
exit state outside $G_{d}$. We omit the detailed discussion here and conclude that,
if dead ends and loops are evenly distributed in $G_{d}$, then under the same restart mechanism presented in Algorithm~\ref{findplat}, the probability that $R$ finds an exit of $G_{d}$ is lower than $\frac{E}{p^{tl}}$, where $E$ is the number of exit states in $N(x, tl)$. In practice, this means that random walk exploration is more helpful on problems with few loops and dead ends than on problems where dead ends and loops are common.
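A simple way to quantify this is to assume that each step of a walk independently runs into a dead end with some probability $\alpha$ (a strong simplifying assumption, used only for illustration); the per-walk exit probability then drops from $E/p^{tl}$ to $(1-\alpha)^{tl} E/p^{tl}$:

```python
def exit_probability(p, tl, E, alpha=0.0):
    """Per-walk probability of reaching an exit when each step independently
    hits a dead end with probability alpha (illustrative model of evenly
    distributed dead ends)."""
    return (1 - alpha) ** tl * E / p ** tl

# With no dead ends we recover E / p^tl; any alpha > 0 strictly lowers it.
p, tl, E = 3, 6, 27
clean = exit_probability(p, tl, E)             # E / p^tl
lossy = exit_probability(p, tl, E, alpha=0.1)  # strictly smaller
```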

