\documentclass{article}
\usepackage{aaai}
\usepackage{graphicx}
\usepackage{rawfonts} 
\usepackage{amsthm} 
\usepackage{amsmath} 
\usepackage{mathtools}
\usepackage{amssymb}

\DeclareMathSizes{10}{9}{6}{6}
\renewcommand{\qedsymbol}{}
\usepackage[usenames]{color} % Only used in comment commands
\definecolor{Blue}{rgb}{0,0.16,0.90}
\definecolor{Red}{rgb}{0.90,0.16,0}
\definecolor{DarkBlue}{rgb}{0,0.08,0.45}
\definecolor{ChangedColor}{rgb}{0.9,0.08,0}
\definecolor{CommentColor}{rgb}{0.2,0.8,0.2}
\definecolor{ToDoColor}{rgb}{0.1,0.2,1}

% *** Use this definition of the command to show the comments ***
\newcommand{\todo}[1]{\textbf{\color{ToDoColor} TODO: #1}}
\newcommand{\changed}[0]{\textbf{\color{ChangedColor} Changed: }}
\newcommand{\hootan}[1]{\textbf{\color{CommentColor} /* #1  (hootan)*/}}
\newcommand{\martin}[1]{\textbf{\color{CommentColor} /* #1  (martin)*/}}
\newcommand{\commentout}[1]{}
\DeclareMathOperator*{\argmax}{arg\,max}

\newtheorem{mydef}{Definition}
\newtheorem{mythe}{Theorem}
\newtheorem{mylem}{Lemma}
\begin{document}

\title{A Theoretical Framework for Studying Random Walk Planning}
\author{Hootan Nakhost\\
University of Alberta, Edmonton, Canada\\
nakhost@ualberta.ca
\And 
Martin M\"uller\\
University of Alberta, Edmonton, Canada\\
mmueller@ualberta.ca
}

\maketitle

\begin{abstract}
Random walks are a relatively new component used in several state-of-the-art satisficing planners.
Empirical results have been mixed: while the approach clearly outperforms more systematic search methods
such as weighted A* on many planning domains, it fails in many others. So far, the explanations for these
empirical results have been somewhat ad hoc.
This paper proposes a formal framework for comparing the performance of random walk and systematic search methods.
Fair homogeneous graphs are
proposed as a graph class that
represents characteristics of the state space of prototypical planning domains,
yet is simple enough to allow a theoretical analysis of the performance of both random walk
and systematic search algorithms.
This gives well-founded insights into the relative strengths and weaknesses of these approaches.
The close relation of the models to some well-known planning domains is shown through
simplified but semi-realistic planning domains that fulfill the constraints of the models.

One main result is that in contrast to systematic search methods,
for which the branching factor plays a decisive role,
the performance of random walk methods is determined to a large degree by the Regress Factor:
the ratio between the probabilities of regressing away from and progressing towards a goal
with a single action.
The performance of random walk and systematic search methods
can be compared
by considering both branching and regress factors of a state space. 
\end{abstract}

\section{Random Walks in Planning}
Random walks, which are paths through a search space that follow
successive randomized state transitions, 
are a main building block of prominent 
search algorithms such as Stochastic Local Search techniques 
for SAT \cite{selman:etal:aaai-92,Pham} and 
Monte Carlo Tree Search in game playing and puzzle solving
\cite{dave,Finnsson,DBLP:conf/ijcai/Cazenave09}. 

Inspired by these methods, several recent satisficing planners also
utilize random walk (RW) techniques. Identidem \cite{identidem} performs 
a hill climbing search that uses random walks to escape from plateaus or saddle points.
All visited states are evaluated using a heuristic function. 
Random walks are biased towards states with lower heuristic value. 
Roamer \cite{roamer} enhances its best-first search (BFS) with random walks,
aiming to escape from \textit{search plateaus} where the heuristic is uninformative. 

Arvand \cite{Arvand} takes a more radical approach:
it relies exclusively on a set of
random walks to determine the next state in its local search.
For efficiency, it only evaluates the endpoints of those random walks. 
Arvand also learns to bias its random walks towards more promising actions
over time, by using the techniques of \textit{Monte Carlo Deadlock Avoidance}
(MDA)
and \textit{Monte Carlo with Helpful Actions} (MHA).
In \cite{Nakhost2012a}, the local search of Arvand2 is 
enhanced by the technique of \textit{Smart Restarts},
and applied to solving Resource Constrained Planning (RCP) problems.
The hybrid \textit{Arvand-LS} system \cite{Xie2012a} combines random walks with 
a local greedy best first search.

Compared to all other tested planners, Arvand2
performs much better on RCP problems \cite{Nakhost2012a},
which test the ability of planners to utilize scarce resources.
In IPC domains, RW-based planners tend to excel in domains with
many paths to the goal. For example, scaling studies in \cite{Xie2012a} show
that RW planners can solve much larger problem instances than other state-of-the-art planners
in the domains of \textit{Transport}, \textit{Elevators}, \textit{Openstacks}, and \textit{Visitall}.
%RW-based planners also perform very well in IPC domains such as \textit{Pipesworld}
%and \textit{Storage}, which traditionally are quite challenging for systematic search. 
However, the planners perform poorly in
\textit{Sokoban}, \textit{Parking}, and \textit{Barman}, puzzle-like domains
with a low solution density in the search space.
%It is not completely clear whether this is caused by minor issues such as
%domain encodings, or whether it exposes fundamental limitations of random walk planning. 

While the success of RW methods in related research areas such as
SAT and Monte Carlo Tree Search
serves as
a good general motivation for trying them in planning, it does not provide
an explanation for why RW planners perform well.  
Previous work has highlighted three main
advantages of random walks for planning:

\begin{itemize}
\item 
Random walks are more effective than systematic search
approaches for escaping from regions where
heuristics provide no guidance \cite{identidem,Arvand,roamer}.
\item 
Increased sampling of the search space by random walks adds a beneficial
\textit{exploration} component to balance the \textit{exploitation} of the heuristic in planners \cite{Arvand}.  
\item  
Combined with proper \textit{restarting} mechanisms,
random walks can avoid most of the time
wasted by systematic search in dead ends. Through restarts, random walks can rapidly back out of 
unpromising search regions \cite{identidem,Nakhost2012a}. 
\end{itemize}

These explanations are intuitively appealing, 
and give a qualitative explanation for the observed behavior on planning benchmarks 
such as IPC and IPC-2011-LARGE \cite{Xie2012a}.
Typically, random walk planners are evaluated by measuring their coverage, 
runtime, or plan quality in such benchmarks. 

\subsection{Studying Random Walk Methods}
There are many possible approaches to gaining a deeper understanding of these methods.

\begin{itemize}
\item
Scaling studies, as in Xie et al. \shortcite{Xie2012a}.
\item
Algorithms combining RW with other search methods, 
as in \cite{roamer,arvand_herd}.
\item
Experiments on small finite instances where 
it is possible to ``measure everything'' and compare the
choices made by different search algorithms.
\item
Direct measurements of the benefits
of RW, such as faster escape from plateaus of the heuristic.
\item
A theoretical analysis of how RW and other search algorithms behave on idealized 
classes of planning problems which are amenable to such analysis.
\end{itemize}

The current paper pursues the last of these approaches.
The main goal is a careful theoretical investigation of the first
advantage claimed above: the question of how random walks
manage to escape from plateaus faster than
other planning algorithms.

\subsection{A First Motivating Example}
As an example, consider the following well-known plateau for
the FF heuristic, $h_{FF}$, discussed in \cite{Helmert04}. 
Recall that $h_{FF}$ estimates the goal distance by
solving a relaxed planning problem in which all the negative effects of actions are ignored. 
Consider a transportation domain in which trucks are used to move packages between $n$ locations
connected in a single chain $c_1,\cdots,c_n$.
The goal is to move one package from $c_n$ to $c_1$.
%\hootan{Do we need a picture?}
%\hootan{Do we need to explain why this is a plateau?}
Figure \ref{fig:transport} shows the results of a basic scaling experiment on this domain with $n=10$ locations,
varying the number of trucks $T$ from 1 to 20. All trucks start at $c_2$. 
The results compare basic
Monte Carlo Random Walks (MRW) from Arvand-2011 and basic Greedy Best First Search (GBFS) from LAMA-2011. 
The figure shows that the runtime of GBFS grows quickly
with the number of trucks $T$
until it exceeds the memory limit of 64 GB.
This is expected, since the effective branching factor grows with $T$. However,
the increasing branching factor has little effect on MRW: its runtime grows only linearly with $T$.

\subsection{Choice of Basic Search Algorithms}

All the examples in this paper use state-of-the-art implementations of
basic, unenhanced search methods.
GBFS as implemented in LAMA-2011 represents
systematic search methods, and the MRW implementation of Arvand-2011 
represents random walk methods.
Both programs use $h_{FF}$ for their evaluation.
All other enhancements such as preferred operators in LAMA and Arvand, multi-heuristic search in LAMA,
and MHA in Arvand are switched off. 

The reasons for selecting this setup are:
\begin{enumerate}
\item A focus on theoretical models that can explain the substantially different behavior of random walk and
systematic search methods.
Using simple search methods allows a close alignment of experiments with theoretical results.
\item Enhancements may benefit both methods in different ways, or be applicable to only one method, and so may
confuse the picture.
\item A main goal here is to understand the behavior of these two search paradigms in regions
where there is a lack of guiding information, such as plateaus. Therefore, in some examples even
a blind heuristic is used. While enhancements can certainly have a great influence on search parameters
such as branching factor, regress factor, and search depth,
the fundamental differences
in search behavior will likely persist across such variations.
\end{enumerate}

\subsection{Contributions of this Paper}

\commentout{
Theorem \ref{thr:additivity} shows how the \textit{hitting time} in a graph
can be computed in terms of
the \textit{unit progress times}.

Two classes of graphs which model the search space of planning problems are proposed, 
in order to study the behaviour of search algorithms: 
\textit{Homogenous} and \textit{Strongly Homogenous} graphs. 
}

\textbf{Regress factor and goal distance for random walks:}
The key property introduced to analyze random walks is the
\textit{regress factor} $\mathit{rf}$,
the ratio of the probability of \textit{regressing} away from a goal
to that of \textit{progressing} towards it.
Besides $\mathit{rf}$, the other key variable affecting the average runtime of basic random walks
on a graph is the \textit{largest goal distance} $D$ in the whole graph,
which appears in the exponent of the expected runtime.

\textbf{Homogeneous graph model:}
In the \textit{homogeneous graph} model, the progress and regress
chances of a node depend only on its goal distance.
Theorem \ref{thr:FWH} shows that the runtime of RW mainly depends on $\mathit{rf}$.
As an example, the state space of Gripper is close to a homogeneous graph.

\textbf{Bounds for other graphs:}
Theorem \ref{thr:bound} extends the theory to compute
upper bounds on the hitting time for graphs which are
not homogeneous, but for which
bounds on the progress and regress chances are known.

\textbf{Strongly homogeneous graph model:}
In \textit{strongly homogeneous graphs}, almost all nodes
share the same $\mathit{rf}$.
Theorem \ref{thr:FH} explains how
$\mathit{rf}$ and $D$ affect the hitting time.
A transport example is used for illustration.

\commentout{
For both models examples that relate the models to standard
planning benchmarks are given, and possible
ways to improve the basic random walks  are discussed.} 

\textbf{Model for Restarting Random Walks:}
For large values of $D$, \textit{restarting random walks} (RRW)
can offer a substantial performance advantage. At each search step,
an RRW restarts from a fixed initial state $s$ with probability $r$.
Theorem \ref{thr:IRH_BOUND} proves that the expected runtime of RRW
depends only on the goal distance of $s$, not on $D$.

\begin{figure}
\centering
\includegraphics[width=0.47\textwidth ]{RESOURCES/transport.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:transport} Average runtime of GBFS and MRW,
  varying the number of trucks (x-axis), in the Transport domain.
  Missing data points indicate that the memory limit was exceeded.}
\vspace{-0.2cm}
\end{figure}

\section{Background and Notation}
Notation follows standard references such as \cite{Norris}.
Throughout the paper, $P(e)$ denotes the
probability of an event $e$ occurring,
 $G=(V,E)$ is a directed graph, and $u,v \in V$ are vertices.

\begin{mydef}[Markov Chain]
The discrete-time random process $X_0, \dots, X_N$ defined over a set of states $S$ is $Markov(S, \mathbb{P})$
iff $P(X_n = j_n | X_{n-1} = j_{n-1}, \dots, X_{0} = j_{0}) = P(X_n = j_n | X_{n-1} = j_{n-1})$. 
The matrix $\mathbb{P} = (p_{ij})$,
where $p_{ij} = P(X_n = j | X_{n-1} = i)$, contains the transition probabilities of the chain.
In the time-homogeneous Markov chains used in this paper, $\mathbb{P}$ does not depend on $n$.
\end{mydef}

\begin{mydef}[Distance $d_G$]
$d_G(u,v)$ is the length of a shortest path from $u$ to $v$ in $G$. 
The distance $d_G(v)$ of a \textit{single} vertex $v$ is the length
of a longest shortest path from a node in $G$ to $v$: 
$d_G(v)=\max_{x \in V} d_G(x, v)$.
\end{mydef}

\begin{mydef}[Successors]
The \textit{successor set} of $u \in V$ contains all vertices
at distance 1 from $u$:  \\
$S_G(u)=\{v | v \in V \wedge d_G(u,v) = 1\}$.
\end{mydef}

\begin{mydef}[Random Walk]
A random walk on $G$ is a Markov chain $Markov(V, \mathbb{P})$
where
$p_{uv} = \frac{1}{|S_G(u)|}$ if $(u,v) \in E$,
and $p_{uv} = 0$ if $(u,v) \notin E$.
\end{mydef}

The \textit{restarting random walk} model used here is a random walk which
\textit{restarts} from a fixed initial
state $s$ with probability $r$ at each step,
and with probability $1-r$ chooses uniformly at random among the successor states.

\begin{mydef}[Restarting Random Walk]
Let $s \in V$ be the initial state, and $ r \in [0, 1]$. 
A restarting random walk $RRW(G, s, r)$
is a Markov chain $M_G$ with states $V$ and transition probabilities
$p_{uv}$:  
\begin{align*}
 p_{uv}=  
  \begin{dcases}
   \frac{1-r}{|S_G(u)|}&  \text{if } (u,v) \in E, v \neq s \\
   r + \frac{1-r}{|S_G(u)|}&  \text{if } (u,v) \in E, v = s \\ 
   0 &  \text{if } (u,v) \notin E, v \neq s \\ 
   r &  \text{if } (u,v) \notin E, v = s \\ 
  \end{dcases} 
\end{align*}
\end{mydef}
An RW is the special case of an RRW with $r=0$.
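To make the four transition cases concrete, they can be written out as a short sketch. The following Python is illustrative only and not part of the paper; it assumes the graph is given as an adjacency list \texttt{succ} mapping each vertex to its successor list, and all names are chosen here.

```python
import random

def rrw_transition_prob(succ, u, v, s, r):
    """Transition probability p_uv of RRW(G, s, r): a restart mass r
    placed on the initial state s, plus a uniform move over the
    successors of u with the remaining mass 1 - r."""
    restart = r if v == s else 0.0
    move = (1.0 - r) / len(succ[u]) if v in succ[u] else 0.0
    return restart + move

def rrw_step(succ, u, s, r, rng=random):
    """Sample one RRW step from u: restart with probability r,
    otherwise move to a uniformly random successor of u."""
    if rng.random() < r:
        return s
    return rng.choice(succ[u])
```

For every $u$, the probabilities $p_{uv}$ sum to 1 over all $v$, as required for a Markov chain.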
\begin{mydef}[Hitting Time]
Let $M=X_0, X_1, \dots, X_N$ be $Markov(S, \mathbb{P})$, and $u,v \in S$. 
Let $H_{uv} = \min\{t \geq 1 : X_t=v \wedge X_0 = u\}$.
Then the hitting time $h_{uv}$ is the expected number 
of steps of the chain started at $u$ 
until it reaches $v$ for the first time: $h_{uv} = E[H_{uv}]$. 
\end{mydef}

\begin{mydef}[Unit Progress Time]
The unit progress time $u_{uv}$ is the expected number of steps in a random
walk after reaching $u$ for the first time until it first gets closer to $v$.
Let $R=RRW(G, s, r)$. 
Let $U_{uv} = \min\{t \geq H_{su} : d_G(X_t, v) = d_G(u, v) - 1\}$. 
Then $u_{uv}=E[U_{uv}]$.
\end{mydef}

\begin{mydef}[Progress, Regress and Stalling Chance; Regress Factor]
Let $X: V \rightarrow V$ be a random variable
with the following probability mass function: 
\begin{align}
\label{eq:homos}
P(X(u) = v) =
  \begin{dcases}
  \frac{1}{|S_G(u)|}  &\text{if }  (u, v) \in E \\
  0   &\text{if } (u, v)  \notin E \\
  \end{dcases}
\end{align}

\noindent Let $X_u$ be short for $X(u)$.
The progress chance $pc(u,v)$,
regress chance $rc(u,v)$, and
stalling chance $sc(u,v)$
of $u$ regarding $v$, are respectively:
the probabilities of getting closer, further away,
or staying at the same distance to $v$ after one random step at $u$.
\begin{align*}
pc(u,v) &= P(d_G(X_u, v) = d_G(u, v)-1) \\
rc(u,v) &= P(d_G(X_u, v) = d_G(u, v)+1)\\ 
sc(u,v) &= P(d_G(X_u, v) = d_G(u, v)) 
\end{align*}

In a Markov chain, the transition probabilities play a key role in 
determining the hitting time. In all the models considered here, 
movement in the chain corresponds to moving between different goal distances.
Therefore it is natural to choose progress and regress chances as the main properties. 

\noindent The regress factor of $u$ regarding $v$ is $\textit{rf}(u,v)=\frac{rc(u,v)}{pc(u,v)}$ 
if $pc(u,v) \neq 0$, and undefined otherwise.
\end{mydef}
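For intuition, these chances can be computed explicitly on a small graph. The sketch below is illustrative rather than part of the paper; it assumes an undirected graph given as a symmetric adjacency list, so that breadth-first search from $v$ yields the goal distances $d_G(\cdot, v)$.

```python
from collections import deque

def bfs_dist(succ, v):
    """Goal distances d_G(., v) by BFS (assumes symmetric edges)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in succ[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def chances(succ, u, v):
    """Progress, regress and stalling chance of u regarding v,
    for one uniform random step at u."""
    dist = bfs_dist(succ, v)
    d, n = dist[u], len(succ[u])
    pc = sum(dist[w] == d - 1 for w in succ[u]) / n
    rc = sum(dist[w] == d + 1 for w in succ[u]) / n
    sc = sum(dist[w] == d for w in succ[u]) / n
    return pc, rc, sc
```

On a path with the goal at one end, every interior vertex has $pc = rc = 1/2$ and $sc = 0$, so the graph is fair and the regress factor is 1.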

\begin{mythe} \label{thr:linearity} \cite{Norris}
Let $M$ be $Markov(V,\mathbb{P})$. Then for all $u,v \in V$, $h_{uv} = 1+ \sum_{x \in V}p_{ux} h_{xv}$.
\end{mythe}

\begin{mythe}\label{thr:additivity}
Let $s \in V$, $D=d_G(u,v)$, $R=RRW(G, s, r)$, $V_d=\{x : x \in V \wedge d_G(x,v) = d \}$,  and $P_{d}(x)$ be the probability of $x$ being the first node in $V_d$
reached by $R$. Then the hitting time $h_{uv} = \sum_{d=1}^D\sum_{x \in V_d}P_{d}(x)u_{xv}$.
 
\end{mythe}

\begin{proof}
Let $H_{uv}$ and $X_d$ be two random variables respectively denoting the length 
of a RRW that starts from $u$ and ends in $v$ for the first time, and 
the first vertex $x \in V_d$ reached by $R$. 
Then
\begin{align}
H_{uv}= \sum_{d=1}^D\sum_{x\in V_d} 1_{\{X_d\}}(x)U_{xv}
\end{align}
where $U_{xv}$ is a random variable measuring the length 
of the fragment of the walk starting from $x$ and ending in a smaller goal distance
for the first time, and $1_{\{X_d\}}(x)$
is an indicator random variable which returns 1 if $X_d=x$ and 0 if $X_d\neq x$.
Since the random variables $1_{\{X_d\}}(x)$ and $U_{xv}$ are independent,
\begin{align}
E[H_{uv}] &= \sum_{d=1}^D\sum_{x\in V_d} E[1_{\{X_d\}}(x)]E[U_{xv}] \nonumber \\
h_{uv} &= \sum_{d=1}^D\sum_{x\in V_d} P_{d}(x) u_{xv}  \nonumber 
\end{align}
\end{proof}




\subsection{Heuristic Functions, Plateaus, Exit Points and Exit Time}
What is the connection between the models introduced here and 
plateaus in planning?  
Using the notation of \cite{SLS},  
let the heuristic value $h(u)$ of vertex $u$ be the estimated length of a shortest path from $u$
to a goal vertex $v$. A \textit{plateau} $P \subseteq V$ 
is a connected subset of states which share the same heuristic value $h_P$.
A state $s$ is an \textit{exit point} of $P$ if $s \in S_G(p)$ for some $p \in P$,
and $h(s) < h_P$. The \textit{exit time} of a 
random walk on a plateau $P$ is the expected number
of steps in the random walk until it first reaches an exit point. 
The problem of finding an exit point in a plateau is equivalent to the problem
of finding a goal in the graph consisting of $P$ plus all its exit points,
where the exit points are goal states.
The expected exit time from the plateau equals the hitting time of this problem.
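This reduction is easy to check by simulation on a toy plateau. The sketch below is illustrative only: it treats the plateau plus its exit points as a small undirected graph, makes the exit points absorbing, and estimates the expected exit time empirically.

```python
import random

def estimate_exit_time(succ, exits, start, trials=20000, seed=1):
    """Monte Carlo estimate of the exit time: the expected number of
    random-walk steps from `start` until an exit point is reached."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        u, steps = start, 0
        while u not in exits:
            u = rng.choice(succ[u])
            steps += 1
        total += steps
    return total / trials
```

For a path-shaped plateau $0 - 1 - 2$ with a single exit point $3$, the estimate is close to $3^2 = 9$, the hitting time of the corresponding goal-search problem.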

\commentout{
In the corresponding graph $G(V, E)$ of a plateau $P$, there is a one to one relation between $V$ and the states
on the plateau, and for $u,v \in V$, $(u,v) \in E$ iff there is an action from the state represented by $u$
to the state represented by $v$. Now if the exit point(s) 
in a plateau is defined as the goal(s) of a RW in the corresponding
graph, then the hitting time in the graph equals the exit time in the plateau.}

\section{Fair Homogeneous Graphs}
A fair homogeneous (FH) graph $G$ is the main state space model introduced here. 
\textit{Homogeneity} means that both progress and regress
chances are constant for all nodes at the same goal distance. \textit{Fairness} means that 
an action can change the goal distance by at most one.
%\todo{throughout the paper rename $v$ to $g$}

\begin{mydef}[Homogeneous Graph]
For $v \in V$, $G$ is $v$-homogeneous iff 
there exist two real functions 
$pc_G(x, d)$ and $rc_G(x, d)$,
mapping $V\times \{0, 1, \dots, d_G(v)\}$ to the range $[0, 1]$,
such that for any two vertices $u,x \in V$ 
with $d_G(u,v)=d_G(x,v)$ the following two conditions hold:
\begin{enumerate}
\item If $d_G(u,v) \neq 0$, then\\ $pc_G(u,v) = pc_G(x,v) = pc_G(v, d_G(u,v))$.
\item $rc_G(u,v) = rc_G(x,v) = rc_G(v, d_G(u,v))$.
\end{enumerate}
G is homogeneous iff it is $v$-homogeneous for all $v \in V$.  
$pc_G(x, d)$ and $rc_G(x,d)$ are called
 progress chance and regress chance of $G$ regarding $x$. 
 The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x,d)=rc_G(x,d)/pc_G(x,d)$.
\end{mydef}

\begin{mydef}[Fair Graph]
$G$ is fair for $v \in V$ iff for all $u \in V$, $pc(u,v)+rc(u,v)+sc(u,v) = 1$. $G$
is fair if it is fair for all $v \in V$. 
\end{mydef}

\begin{mylem} \label{lem:IRH}
Let $G=(V,E)$ be FH and $v \in V$.  
Then for all $x \in V$, $h_{xv}$ depends only on the goal distance
$d=d_G(x,v)$, not on the specific choice of $x$, so
$h_{xv}=h_d$.
\end{mylem}

\begin{proof}
This lemma holds for both RW and RRW. 
The proof for RRW is omitted for lack of space.
Let $p_d=pc_G(v,d)$, $q_d=rc_G(v,d)$, $c_d=sc_G(v,d)$, $D=d_G(v)$, 
and $V_d = \{x : x \in V \wedge d_G(x,v)=d \}$. 
The first of two proof steps shows that for all $x \in V_d$, $u_{xv}=u_{d}$.

Let $I_x(d)$ be the number of times a random walk starting from $x \in V_d$ visits a state with
goal distance $d$ before first reaching the goal distance $d-1$, and let $J_x(d)$ be the number 
of steps between two consecutive such visits. 
Then, $u_{xv} =E[I_x(d) \times J_x(d) + 1]$.
Claim: both $I_x(d)$ and $J_x(d)$ are independent of the specific choice of $x \in V_d$,
so $I_x(d) = I(d)$ and $J_x(d) = J(d)$. This implies
$u_{xv} =E[I_x(d) \times J_x(d) + 1] = E[I(d) \times J(d) + 1]$
independent of the choice of $x$, so $u_{xv}=u_{d}$.

First, the progress chance for all $x \in V_d$ is $p_d$, therefore
$E[I_x(d)]=\frac{1}{p_d} = I(d)$, the expected value
of a geometric distribution with the success probability $p_d$.

Second, $E[J_x(d)] = J(d)$ and therefore
$u_{xv}=u_{d}$ are shown by downward induction for $d = D,\cdots,1$.
For the base case $d=D$, since the random walk can only stall between visits, 
$E[J_x(D)] = J(D) = 1$.
Now assume the claims about $J$ and $u$ hold for $d+1$, so
for all $x' \in V_{d+1}$, $E[J_{x'}(d+1)] = J(d+1)$ and $u_{x'v} = u_{d+1}$.
Call the last step at distance $d$, before progressing to $d-1$, a \textit{successful $d$-visit}, and all previous
visits, which do not immediately proceed to $d-1$, \textit{unsuccessful  $d$-visits}.
After an unsuccessful $d$-visit,
a random walk starting at any $x \in V_d$
stalls at distance $d$ with probability $c_d$,
and transitions to a node with distance $d+1$ with probability $q_d$,
after which it reaches distance $d$ again
after an expected $u_{d+1}$ steps. Therefore,
$$
E[J_x(d)] = \frac{\left(c_d + q_d(u_{d+1}+1) \right)}{1-p_d} = J(d)
$$
independent of $x$.
As the second proof step, the lemma now follows from Theorem \ref{thr:additivity}:
\begin{eqnarray} 
h_{xv} = \sum_{d=1}^{d_G(x,v)}\sum_{k \in V_d}P_{d}(k)u_{kv} = \sum_{d=1}^{d_G(x,v)}u_d = h_d \label{eq:chain}
\end{eqnarray}
\end{proof}




\begin{mythe} \label{thr:FWH}
Let $G=(V,E)$ be FH, $v \in V$,
$p_i = pc_G(v,i)$, $q_i=rc_G(v,i)$, and $d_G(v) = D$.
Then for all $x \in V$, 
\begin{eqnarray} 
h_{xv} = \sum_{d=1}^{d_G(x,v)}  \left(\beta_D \prod^{D-1}_{i=d} \lambda_i+ \sum^{D-1}_{j=d}\left(\beta_j\prod^{j-1}_{i=d}\lambda_i\right)\right) \nonumber
\end{eqnarray}
\noindent where for all $1 \leq d \leq D$, $\lambda_d = \frac{q_d}{p_d}$, and $\beta_d = \frac{1}{p_d}$.
\end{mythe}
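Numerically, the nested products are easiest to evaluate through the recurrence used in the proof, $u_D = \beta_D$ and $u_d = \lambda_d u_{d+1} + \beta_d$, summing $h = \sum_{d=1}^{d_G(x,v)} u_d$. The sketch below is illustrative, with the chances passed as lists indexed by goal distance (index 0 unused).

```python
def fh_hitting_time(p, q, d_start):
    """Expected hitting time from goal distance d_start on a fair
    homogeneous graph, where p[d] and q[d] are the progress and
    regress chances at goal distance d, for 1 <= d <= D."""
    D = len(p) - 1
    u = [0.0] * (D + 1)        # u[d]: expected unit progress time
    u[D] = 1.0 / p[D]          # u_D = beta_D
    for d in range(D - 1, 0, -1):
        u[d] = (q[d] / p[d]) * u[d + 1] + 1.0 / p[d]  # lambda_d u_{d+1} + beta_d
    return sum(u[1:d_start + 1])
```

For a symmetric walk on a path ($p_d = q_d = 1/2$ at interior distances, $p_D = 1$), this recovers the classical hitting time $D^2$ from the farthest node.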

\begin{proof}
According to Lemma \ref{lem:IRH} and Theorem \ref{thr:linearity},
\begin{eqnarray} 
h_0 & = & 0 \nonumber \\
h_d & = & p_d h_{d-1} + q_d  h_{d+1} + c_d  h_{d} + 1 \quad (0 < d< D) \nonumber \\
h_D & = & p_D h_{D-1} + (1-p_D) h_D + 1 \nonumber
\end{eqnarray}
Let $u_d = h_d - h_{d-1}$, then
\begin{eqnarray} 
u_d & = & \lambda_d u_{d+1} + \beta_d \quad (0 < d< D) \nonumber \\
u_D & = & \beta_D  \nonumber
\end{eqnarray}


By induction on $d$, for $d<D$ 
\begin{eqnarray} 
u_d=\beta_D\prod^{D-1}_{i=d} \lambda_i + \sum^{D-1}_{j=d}\left(\beta_j\prod^{j-1}_{i=d}\lambda_i\right) \label{eq:distance} 
\end{eqnarray}
This is trivial for $d=D-1$. Assume that Equation \ref{eq:distance} holds for $d+1$. Then,
using Equation \ref{eq:chain} for the final step,
\begin{eqnarray} 
u_d &=& \lambda_d\left(\beta_D\prod^{D-1}_{i=d+1} \lambda_i + \sum^{D-1}_{j=d+1}\left(\beta_j\prod^{j-1}_{i=d+1}\lambda_i\right)\right)+\beta_d  \nonumber  \\
&=& \beta_D\prod^{D-1}_{i=d} \lambda_i + \lambda_d \sum^{D-1}_{j=d+1}\left(\beta_j\prod^{j-1}_{i=d+1}\lambda_i\right)+\beta_d     \nonumber \\
&=& \beta_D\prod^{D-1}_{i=d} \lambda_i +  \sum^{D-1}_{j=d+1}\left(\beta_j\prod^{j-1}_{i=d}\lambda_i\right)+\beta_d \prod^{d-1}_{i=d} \lambda_i      \nonumber \\
&=& \beta_D\prod^{D-1}_{i=d} \lambda_i +  \sum^{D-1}_{j=d}\left(\beta_j\prod^{j-1}_{i=d}\lambda_i\right)      \nonumber \\
h_{xv} &=& \sum_{d=1}^{d_G(x,v)}  \left(\beta_D \prod^{D-1}_{i=d} \lambda_i+ \sum^{D-1}_{j=d}\left(\beta_j\prod^{j-1}_{i=d}\lambda_i\right)\right) \nonumber
\end{eqnarray}
\end{proof}

The largest goal distance $D$ and the regress factors $\lambda_i = q_i/p_i$ 
are the main factors determining
the expected runtime of random walks in homogeneous graphs. 
%This generalizes the earlier result for homogenous graphs.

\subsection{Example domain: One-handed Gripper}

\begin{table}[tp]%
\centering
\begin{tabular}[t]{ clcccc }
Robot   &  Gripper & $\mathit{pc}$ & $\mathit{rc}$ & $\mathit{rf}$ & $\mathit{b}$\\
\vspace{0.1cm}
$A$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1 & 1 \\
\vspace{0.1cm}
$A$   & empty & $\frac{|A|}{|A| + 1}$ & $\frac{1}{|A| + 1}$ & \boldmath $\frac{1}{|A|}$ & \boldmath $|A|$\\
\vspace{0.1cm}
$B$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1& 1\\
\vspace{0.1cm}
$B$   &  empty & $\frac{1}{|B| + 1}$ & $\frac{|B|}{|B| + 1}$ & $|B|$ & $|B|$\\
\end{tabular}
\caption{Random walks in One-handed Gripper. 
$|A|$ and $|B|$ denote the number of balls in A and B.}
\label{table:gripper}
\end{table}

Consider a one-handed gripper
domain, where a robot must move $n$ balls from room A to room B
using three kinds of actions:
picking up a ball, dropping the ball it carries, and moving to the other room. 
The highly symmetrical search space is FH:
the goal distance determines the distribution of
balls between the rooms as well as the robot location and gripper status,
as shown in Table \ref{table:gripper}. 
The graph is fair since no action changes the goal distance by more than one.
The expected hitting time is given by Theorem \ref{thr:FWH}. 

\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{RESOURCES/gripper.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:gripper} The average number of generated states as a function of the number of balls (x-axis) in the Gripper domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:gripper} plots the predictions of Theorem \ref{thr:FWH} together with
the results of a scaling experiment, varying $n$ for both random walks and greedy best first search. 
To simulate the behaviour of both algorithms in plateaus with a lack
of heuristic guidance, a blind heuristic is used
which returns 0 for the goal and 1 otherwise. 
Search stops at a state with a heuristic value lower than
that of the initial state. Because of the blind heuristic, the only such state is the goal state. 
The prediction matches the experimental results extremely well. 
Random walks outperform greedy best first search.
The regress factor $\mathit{rf}$ never exceeds $b$,
and is significantly smaller in states with the robot at $A$ and an empty gripper,
which account for almost one quarter of all states.

\subsection{Biased Action Selection for Random Walks}
Regress factors can be changed by biasing the action selection in the random walk. 
It seems natural to first select an action type
uniformly at random, and then ground the chosen action.
In Gripper, this means choosing uniformly among the balls in the current room
in the case of the pick-up action. 

With this biased selection, the search space becomes fair homogeneous with $q=p=\frac{1}{2}$. 
The experimental results and theoretical prediction for
such walks are included in Figure \ref{fig:gripper}. The hitting time 
grows only linearly with $n$. It is interesting that this natural
way of biasing random walks
is able to exploit the symmetry inherent in the gripper domain. 
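The comparison between the two selection schemes is easy to reproduce in simulation. The sketch below is our own minimal model of the one-handed Gripper walk, not the experimental code used for Figure \ref{fig:gripper}; the state encoding, random seed, and trial counts are illustrative assumptions:

```python
import random

def gripper_walk(n, rng, biased=False):
    """Steps of a random walk in one-handed Gripper until all n balls are
    in room B.  State: a = balls in room A, carrying (0/1), robot room."""
    a, carrying, room = n, 0, 'A'
    steps = 0
    while not (a == 0 and carrying == 0):
        if carrying:
            actions = ['move', 'drop']          # each type has one grounding
        else:
            here = a if room == 'A' else n - a  # balls in the current room
            if biased:
                # choose an applicable action *type* uniformly, then ground it
                actions = ['move'] + (['pick'] if here > 0 else [])
            else:
                # uniform over grounded actions: one pick action per ball here
                actions = ['move'] + ['pick'] * here
        act = rng.choice(actions)
        if act == 'move':
            room = 'B' if room == 'A' else 'A'
        elif act == 'drop':
            carrying = 0
            a += 1 if room == 'A' else 0
        else:                                   # pick up a ball in this room
            carrying = 1
            a -= 1 if room == 'A' else 0
        steps += 1
    return steps

rng = random.Random(1)
trials = 200
uniform_mean = sum(gripper_walk(6, rng) for _ in range(trials)) / trials
biased_mean = sum(gripper_walk(6, rng, biased=True) for _ in range(trials)) / trials
print(uniform_mean, biased_mean)
```

With `biased=True` the walk realizes the $q=p=\frac{1}{2}$ case, so the measured mean should track the linear prediction; with uniform grounded-action selection it follows the unbiased curve of Figure \ref{fig:gripper}.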

\section{Extension to Bounds for Other Graphs}
While many planning problems cannot be exactly modelled as 
FH graphs, these models
can still be used to obtain upper bounds on the hitting time in any fair graph $G$
which models a plateau. 
Consider a corresponding FH graph $G'$ 
with progress and regress chances at each goal distance $d$ set to 
the minimum progress and maximum regress chances, respectively, over all nodes at goal distance $d$ in $G$.
Then the hitting times for $G'$ 
are an upper bound for the hitting times in $G$,
since in $G'$, progressing towards the goal is at most as probable as in $G$.

\begin{mythe} \label{thr:bound}
Let $G=(V,E)$ be a directed graph, $s, v \in V$, $R=RRW(G, s, r)$, and $D=d_G(v)$. 
Let $p_{min}(d)$ and $q_{max}(d)$ be 
the minimum progress and maximum regress chance among all nodes at distance $d$ of $v$. 
Let $G'=(V', E')$ be an FH graph, $v', s' \in V'$, $d_{G'}(v')=D$, $R'=RRW(G', s', r)$, 
$pc_{G'}(v',d)=p_{min}(d)$, $rc_{G'}(d) = q_{max}(d)$,
and $sc_{G'}(d) = 1-p_{min}(d) - q_{max}(d)$. 
Then the hitting time of $R'$, $h'_{s'v'}$, is an upper bound 
for the hitting time of $R$, $h_{sv}$, i.e., $h_{sv} \leq h'_{s'v'}$ if $d_G(s, v) = d_{G'}(s', v')$.
\end{mythe}

\begin{proof}
Again, for space reasons only the case $r=0$ is shown. 
Let $V_d = \{x | x \in V \wedge d_G(x,v)=d \}$, and 
assume for all $x \in V_d$ that $u_{xv} \leq u'_d$, where $u'_d$ is the unit progress time at distance $d$ of $v'$. 
According to Theorem \ref{thr:additivity},
\begin{eqnarray}
h_{sv} &=& \sum_{d=1}^{d_{G}(s,v)}\sum_{x\in V_d} P_{d}(x) u_{xv}  \leq \sum_{d=1}^{d_{G'}(s',v')} u'_{d} = h'_{s'v'} \nonumber 
\end{eqnarray}
To prove $u_{xv} \leq u'_d$ by induction, 
assume for all $x' \in V_{d+1}$ that $u_{x'v} \leq u'_{d+1}$. Then
$u_{xv} \leq q_x (u'_{d+1}+u_{Iv}) + (1-p_x-q_x) u_{Jv} + 1$,
where $I$ and $J$ are random variables defined
over $V_d$, and $p_x$ and $q_x$ denote the progress and regress chances of $x$. 
Let $m = \argmax_{i \in V_d}(u_{iv})$.
Then, 
\begin{eqnarray}
u_{mv} &\leq& q_m (u'_{d+1}+u_{mv}) + (1-p_m-q_m) u_{mv} + 1 \nonumber \\
u_{mv} &\leq& \frac{q_m}{p_m} u'_{d+1} + \frac{1}{p_m} \leq \frac{q_{max}(d)}{p_{min}(d)} u'_{d+1} + \frac{1}{p_{min}(d)} \leq u'_d \nonumber
\end{eqnarray}
Analogously, for the base case $d=D$, for all $x \in V_D$
\begin{eqnarray}
u_{xv} &\leq& \frac{1}{p_x} \leq \frac{1}{p_{min}(D)} \leq u'_D \nonumber 
\end{eqnarray}
\end{proof}
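Theorem \ref{thr:bound} is easy to sanity-check numerically. The sketch below builds an illustrative fair chain of our own construction (not from the paper) with two node types per goal distance, estimates its hitting time by simulation, and computes the FH upper bound from $p_{min}$ and $q_{max}$ via the unit progress recursion used in the proof of Theorem \ref{thr:FWH}:

```python
import random

# Illustrative fair graph: two node "types" at every goal distance,
# given as (progress, regress) chances.  Here p_min = 0.4, q_max = 0.4.
TYPES = [(0.5, 0.3), (0.4, 0.4)]
D = 6  # largest goal distance

def simulate(rng):
    """One walk from distance D to the goal.  The node type is re-drawn on
    every step; the distance changes by at most one, so the graph is fair."""
    d, steps = D, 0
    while d > 0:
        p, q = rng.choice(TYPES)
        u = rng.random()
        if u < p:
            d -= 1
        elif u < p + q and d < D:
            d += 1
        steps += 1
    return steps

rng = random.Random(2)
trials = 2000
estimate = sum(simulate(rng) for _ in range(trials)) / trials

# FH bound graph G': worst-case chances at every distance.
p_min, q_max = 0.4, 0.4
u = 1.0 / p_min              # unit progress time at distance D
bound = u
for _ in range(D - 1):       # distances D-1 down to 1
    u = (q_max / p_min) * u + 1.0 / p_min
    bound += u
print(estimate, bound)       # the estimate stays below the bound
```

For these toy parameters the bound evaluates to $2.5 \cdot D(D+1)/2 = 52.5$, while the simulated hitting time is noticeably smaller, as the theorem predicts.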

\section{Fair Strongly Homogeneous Graphs}
A fair strongly homogeneous (FSH) graph $G$ is an FH graph in which
$pc$ and $rc$ are constant over all nodes. FSH graphs are simpler
to study and suffice to explain the main properties of FH graphs. Therefore, this model is used to discuss
key issues such as the dependency of the hitting time on the largest goal distance $D$ and 
on the regress factor. 

\begin{mydef}[Strongly Homogeneous Graph]
Given $v \in V$,
$G$ is strongly $v$-homogeneous iff there exist two real functions $pc_G(x)$ and $rc_G(x)$ 
with domain $V$ and range $[0, 1]$ such that for any vertex $u \in V$ the following
two conditions hold:
\begin{enumerate}
\item If $u \neq v$ then $pc(u,v) = pc_G(v)$.
\item If $d(u,v) < d_G(v)$ then $rc(u,v) = rc_G(v)$. 
\end{enumerate}
$G$ is strongly homogeneous iff it is strongly $v$-homogeneous for all $v \in V$.  
The functions $pc_G(x)$ and $rc_G(x)$ are respectively called
the progress and the regress chance of $G$ regarding $x$. 
The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x)=rc_G(x)/pc_G(x)$.
\end{mydef}

\begin{mythe} \label{thr:FH}

Let $G=(V,E)$ be an FSH graph. For $u,v \in V$, let 
$p=pc_G(v) \neq 0$, $q=rc_G(v)$, $c=1-p-q$, $D = d_G(v)$, and $d=d_G(u,v)$.
Then the hitting time $h_{uv}$ is: 
\begin{align}
\label{eq:FH}
h_{uv}=
  \begin{dcases}
   \beta_0\left(\lambda^{D}-\lambda^{D - d}\right) + \beta_1d&\text{if }  q \neq p \\
   \alpha_0(d- d^2) + \alpha_1Dd  &  \text{if } q =p
  \end{dcases}
\end{align}
where $\lambda = \frac{q}{p}$, $\beta_0 = \frac{q}{(p-q)^{2}}$, $\beta_1 = \frac{1}{p-q}$, $\alpha_0 = \frac{1}{2p}$, $\alpha_1 = \frac{1}{p}$.
\end{mythe}

The proof follows directly from Theorem \ref{thr:FWH} above.
When $q > p$, the main determining factors for the hitting time are the regress factor $\lambda = q/p$ and $D$: the hitting time 
grows exponentially with $D$, and polynomially, with degree $D$, in $\lambda$.
As long as $\lambda$ and $D$ are fixed, changing other 
structural parameters such as the branching factor $b$ can increase the hitting time at most linearly.
Note also that for $q > p$, it hardly matters how close the start state is to the goal: the hitting time
mainly depends on $D$, the largest goal distance in the graph. 
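Equation \ref{eq:FH} can be checked numerically: the unit progress times of the underlying chain satisfy $u_D = 1/p$ and $p\,u_x = q\,u_{x+1} + 1$, and summing them yields the hitting time exactly. The sketch below, with arbitrarily chosen parameter values, compares this direct solution with the closed form:

```python
def h_closed(p, q, D, d):
    """Hitting time from Theorem thr:FH (Equation eq:FH)."""
    if p != q:
        lam, b0, b1 = q / p, q / (p - q) ** 2, 1.0 / (p - q)
        return b0 * (lam ** D - lam ** (D - d)) + b1 * d
    a0, a1 = 1.0 / (2 * p), 1.0 / p
    return a0 * (d - d * d) + a1 * D * d

def h_direct(p, q, D, d):
    """Exact solution via unit progress times:
    u_D = 1/p, u_x = (q * u_{x+1} + 1) / p, and h_d = u_1 + ... + u_d."""
    us = {D: 1.0 / p}
    for x in range(D - 1, 0, -1):
        us[x] = (q * us[x + 1] + 1.0) / p
    return sum(us[x] for x in range(1, d + 1))

# Both agree for q > p, q = p, and q < p:
for p, q, D, d in [(0.4, 0.5, 10, 4), (0.3, 0.3, 8, 5), (0.5, 0.2, 12, 12)]:
    print(p, q, D, d, h_closed(p, q, D, d), h_direct(p, q, D, d))
```

The middle case $p=q=0.3$, $D=8$, $d=5$ gives $h = (d-d^2)/(2p) + Dd/p = 100$ under both computations.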

\subsection{Analysis of the Transport Example}
Theorem \ref{thr:FH} helps explain the experimental results in Figure \ref{fig:transport}.  
In this example,
the plateau consists of all the states encountered before loading the package
onto one of the trucks. Once the package is loaded, $h_{FF}$ can guide the search
directly towards the goal. Therefore, the exit points of the plateau are the states in which 
the package is loaded onto a truck. 
Let $m<n$ be the location index of a most advanced truck in the chain.
For all non-exit states of the search space, $q \leq p$ holds:
there is always at least one action that progresses towards a closest exit point,
namely moving a truck from $c_m$ to $c_{m+1}$.
There is at most one regressing action: if $m>1$ and only
a single truck is at $c_m$, then moving that truck to $c_{m-1}$ reduces $m$.

According to Theorem \ref{thr:bound}, setting $q = p$ for all states yields an upper
bound on the hitting time, since increasing the regress factor can only increase the hitting time.
By Theorem \ref{thr:FH}, $ -\frac{x^2}{2p}+(\frac{2D+1}{2p})x$ is an upper bound
for the hitting time. 
If the number of trucks is multiplied by a factor $M$,
 then $p$ will be divided by at most $M$, therefore the upper bound is
also multiplied by at most $M$. 
The worst case runtime bound grows only linearly with the number of trucks. In contrast, 
systematic search methods suffer greatly from increasing the number of vehicles,
since this increases
the effective branching factor $b$. The runtime of systematic search methods such as
greedy best first search, A* and IDA* typically grows as $b^d$
when the heuristic is ineffective. 
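This scaling argument can be made concrete with a small calculation. The sketch below evaluates the quadratic plateau bound from Theorem \ref{thr:FH} with $p$ divided by $M$, which multiplies the bound exactly by $M$; the systematic-search term uses an assumed effective branching factor proportional to $M$, purely for illustration:

```python
def plateau_bound(p, D, x):
    """Upper bound -x^2/(2p) + (2D+1)x/(2p) on the plateau hitting time
    (Theorem thr:FH with q = p)."""
    return -x * x / (2 * p) + (2 * D + 1) * x / (2 * p)

p, D, x, b = 0.2, 10, 10, 3   # b = assumed base branching factor (illustrative)
for M in (1, 2, 4, 8):
    rw = plateau_bound(p / M, D, x)   # random walk bound: linear in M
    systematic = (b * M) ** x         # ~ b^d growth for uninformed search
    print(M, rw, systematic)
```

Since the bound is linear in $1/p$, doubling the number of trucks at most doubles it, while the $b^d$ term grows by a factor of $2^d$.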

This effect can be observed in all planning problems where increasing the number of 
 objects of a specific type
does not change the regress factor. Examples are the vehicles
in transportation domains such as Rovers, Logistics, Transport, and Zeno Travel, 
or agents which share similar functionality but do not appear in the goal, 
such as the satellites in the satellite domain. 
All of these domains contain symmetries similar to the example above, 
where any one of several vehicles or agents can be chosen to achieve the goal.
Other examples are ``decoy'' objects which cannot be used to reach the goal. 
Actions that affect only
 the state of such objects do not change the goal distance, 
 so increasing the number of such objects has no effect on $\textit{rf}$
 but can increase $b$. Techniques such as  
plan space planning, backward chaining planning, preferred operators,
or explicitly detecting and dealing with symmetries can often prune such actions.

Theorem \ref{thr:FH} suggests that if $q > p$ and the current state is close to 
an exit point of the plateau, then
systematic search is more effective, since random 
walks move away from the exit with high probability. 
This problematic behavior of RW can be mitigated to some degree by using restarting random walks.

\section{Analysis of Restarting Random Walks}


\begin{mythe}\label{thr:IRH_BOUND}
Let $G=(V,E)$ be an FSH graph, $v \in V$, $p=pc_G(v)$ and $q=rc_G(v)$.
Let $R=RRW(G, s, r)$ with $0<r<1$. The hitting time $h_{sv} \in O\left(\beta\lambda^{d-1}\right)$, where 
$\lambda=\left(\frac{q}{p}+ \frac{r}{p(1-r)}+1\right)$, $\beta=\frac{q+r}{pr}$ and $d=d_G(s,v)$.
\end{mythe}

\begin{proof}
Let $d=d_G(s,v)$. According to Theorem \ref{thr:linearity} and Lemma \ref{lem:IRH},
\begin{eqnarray}
h_0 &=& 0 \nonumber \\
h_x &=& (1-r)\left( qh_{x+1} + ph_{x-1} + ch_x + 1\right) + rh_d \nonumber \\
h_D &=& (1-r)\left( ph_{D-1} + (1-p)h_D + 1\right) + rh_d \nonumber 
\end{eqnarray}
Let $u_x = h_x - h_{x-1}$, then for $x < d$,
\begin{eqnarray}
%u_x &=& (1-r)\left(q(h{x+1} - h{x}) + p(h_{x-1} - h_{x-2}) + c(h_{x} - h_{x-1}) \right)  \nonumber \\
u_x  &=& (1-r)(qu_{x+1} + pu_{x-1} + cu_{x} )  \nonumber \\
        &=& \frac{(1-r)q}{1-c+cr}u_{x+1} + \frac{(1-r)p}{1-c+cr}u_{x-1}  \nonumber 
\end{eqnarray}
Since $\frac{(1-r)q}{1-c+cr}u_{x+1} \geq 0$ and $c=1-p-q$,
\begin{eqnarray}
u_x &\geq& \frac{(1-r)p}{q(1-r)+p(1-r)+r}u_{x-1}  
        = \lambda^{-1}u_{x-1}  \nonumber \\
u_x &\leq& \lambda ^{d-x} u_{d}  \nonumber \\
h_x &\leq&  \sum_{i=1}^{x}u_i \leq u_{d}  \sum_{i=1}^{x} \lambda ^{d-i} 
= \lambda ^{d-x} \left(\frac{\lambda^{x}-1}{\lambda-1}\right) u_{d}  \nonumber 
\end{eqnarray}
The value $u_d$ is the progress time from the goal distance $d$. Therefore, 
\begin{eqnarray}
u_d &=& (1-r)\left(cu_d + q(1+ u_{d+1} + u_d) + 1\right) + ru_d  \nonumber 
\end{eqnarray}
Since $R$ restarts from $s$, at distance $d$, with probability $r$ at each step, $1 + u_{d+1} \leq \frac{1}{r}$. 
\begin{eqnarray}
  u_d  &\leq& \left(r+ (1-r)(1-p)\right) u_d + \left( \frac{q}{r} +1\right) (1-r) \nonumber \\
       &\leq& \frac{q+r}{rp}        
       = \beta \nonumber        
\end{eqnarray}
Furthermore, 
\begin{eqnarray}
h_d &=& u_d + h_{d-1} \leq \beta + \beta\lambda (\frac{\lambda^{d-1}-1}{\lambda-1}) \nonumber \\
h_d &\in& O\left(\beta\lambda^{d-1}\right) \label{eq:IRH}
\end{eqnarray}
\end{proof}

By decreasing $r$, $\lambda$ decreases while $\beta$ increases. Since the upper bound 
grows polynomially in $\lambda$ (with a degree that depends on $d(s,v)$) but only linearly in $\beta$, 
a small value should be chosen for $r$ to keep the upper bound low, especially when $d(s,v)$ is large. 
The $r$-value which minimizes the upper bound can be computed from Equation \ref{eq:IRH}. 
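For instance, the minimizing $r$-value can be found by a simple grid search over the bound. In the sketch below, the constant factor hidden in the $O$-notation is set to 1 and the parameter values are arbitrary:

```python
def rrw_bound(p, q, d, r):
    """Upper bound beta * lambda^(d-1) from Theorem thr:IRH_BOUND,
    with the hidden constant taken as 1."""
    lam = q / p + r / (p * (1 - r)) + 1
    beta = (q + r) / (p * r)
    return beta * lam ** (d - 1)

def best_r(p, q, d):
    """Restart rate minimizing the bound, found by grid search."""
    grid = [i / 1000.0 for i in range(1, 500)]
    return min(grid, key=lambda r: rrw_bound(p, q, d, r))

p, q = 0.25, 0.5
for d in (2, 8, 14):
    print(d, best_r(p, q, d))   # the minimizing r shrinks as d grows
```

Consistent with the discussion above, the minimizing restart rate moves toward 0 as $d(s,v)$ grows.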

Comparing the values of $\lambda$ in the hitting time of RW and RRW,
Equations \ref{eq:IRH} and \ref{eq:FH}, 
the base of the exponential term for RRW 
exceeds the regress factor, the base of the exponential term for RW, by $\frac{r}{p(1-r)} + 1$.
For small $r$, this is close to $1$.

The main advantage of RRW over simple random walks is for small $d(s,v)$, since
the exponent of the exponential term is reduced from $D$ to $d(s,v)-1$.
Restarting is a bit wasteful when $d(s,v)$ is close to $D$. 

\subsection{A Grid Example}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth, height=0.26\textheight ]{RESOURCES/grid.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:grid} The average number of generated states, varying 
  the goal distance of the starting state (x-axis) and the restart rate, in the Grid domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:grid} shows the results of RRW with restart rate $r \in \{ 0, 0.1, 0.01, 0.001 \}$
in a variant of the Grid domain with an $n\times n$ grid and a robot 
that needs to pick up a key at location $(n,n)$ in order to unlock a door at
$(0,0)$. 
The robot can only move left, up or down, except in 
the top row, where it is also allowed to move right, but not up.
This setup yields regress factors larger than 1.

In this domain, all states before the robot picks up the key share the same $h_{FF}$ value. 
Figure \ref{fig:grid} shows the average number of states generated until this subgoal 
is reached, with the robot starting from different goal
distances plotted on the x-axis. Since the regress factors
are not uniform in this domain, Theorem \ref{thr:IRH_BOUND} does not apply directly. 
Still, comparing the results of RRW for different $r>0$ with
simple random walks where $r=0$, the experiment confirms the high-level predictions of
Theorem \ref{thr:IRH_BOUND}: 
RRW generates slightly more states 
than simple random walks when the initial goal distance is large ($d \geq 14$) and $r$ is small enough.
RRW is much more efficient when $d$ is small; for example, it generates three orders of magnitude 
fewer states for $d=2$ and $r=0.01$.    
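The same qualitative effect can be reproduced on an idealized FSH chain. The sketch below uses toy parameters of our own choosing, not the Grid experiment itself: with $q > p$ and a start state close to the goal, restarts cut the hitting time drastically.

```python
import random

def walk(p, q, D, d, r, rng):
    """Steps until distance 0 for a restarting random walk that starts at
    distance d on an FSH chain; r = 0 gives a plain random walk."""
    x, steps = d, 0
    while x > 0:
        if r > 0 and rng.random() < r:
            x = d                     # restart from the start state
        else:
            u = rng.random()
            if u < p:
                x -= 1
            elif u < p + q and x < D:
                x += 1
        steps += 1
    return steps

rng = random.Random(4)
p, q, D, d, trials = 0.25, 0.5, 10, 2, 200
plain = sum(walk(p, q, D, d, 0.0, rng) for _ in range(trials)) / trials
restart = sum(walk(p, q, D, d, 0.02, rng) for _ in range(trials)) / trials
print(plain, restart)
```

For these parameters the plain walk has expected hitting time $\beta_0(\lambda^D - \lambda^{D-d}) + \beta_1 d = 6136$ by Equation \ref{eq:FH}, while the restarting walk stays near the exit and needs far fewer steps on average.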

%\vspace{-0.5cm}
\section{Related Work}
Random walks have been extensively studied in many different scientific fields including 
physics, finance and computer networking
\cite{rw_network,rw_finance,rw_supply}.
Linear algebra approaches to discrete and continuous random walks are well studied
\cite{Norris,Aldous,Yin,pardoux}. 
The current paper mainly uses methods for finding the hitting time of 
simple chains such as birth--death and gambler's ruin chains \cite{Norris}. 
Such solutions can easily be expressed as functions
of chain features. 

Properties of random walks on finite graphs have been studied extensively \cite{lovasz}. 
One of the most relevant results is the $O(n^3)$ hitting time of a random walk in an 
undirected graph with $n$
nodes \cite{Brightwell}. 
However, this result does not explain the strong performance of random walks 
in planning search spaces which grow exponentially with the number of objects. 
Despite the rich existing literature on random walks, the application to 
the analysis of random walk planning seems to be novel. 

\section{Discussion and Future Work}
Important open questions about the current work are how well it models
real planning problems such as IPC benchmarks, and real planning algorithms. 

\noindent \textbf{Relation to full planning benchmarks:}
Can such benchmarks be described within these models
in terms of bounds on their regress factors? Can the models be extended to
represent the core difficulties involved in solving more planning domains?
What is the structure of plateaus within their state spaces, 
and how do plateaus relate to the overall difficulty of solving those instances?
Instances with small state spaces could be completely enumerated
and such properties measured.
For larger state spaces, can measurements of
true goal distances be approximated by heuristic evaluation,
by heuristics combined with local search, or by sampling?

\noindent \textbf{Effect of search enhancements:} 
To move from abstract, idealized algorithms towards more realistic planning algorithms,
it would be interesting to study the whole spectrum starting with the basic methods
studied in this paper up to
state of the art planners, switching on improvements one by one and studying their
effects under both RW and systematic search scenarios.
For example, the RW enhancements MHA and MDA \cite{Arvand} should be studied. 

\noindent \textbf{Extension to non-fair graphs:}
Generalize Theorem \ref{thr:IRH_BOUND} to 
non-fair graphs, 
where an action can increase the goal distance by more than one. Such graphs can be used
to model planning problems with dead ends. 

\noindent \textbf{Hybrid methods:}
Develop theoretical models for methods that combine random walks with
memory and systematic search, such as \cite{roamer,Xie2012a}.



\vskip 0.2in
\bibliography{socs}
\bibliographystyle{aaai}

\end{document}






