\documentclass[twoside,11pt]{article}
\usepackage{jair}
\usepackage{graphicx}
\usepackage{theapa} 
\usepackage{rawfonts} 
\usepackage{amsthm} 
\usepackage{amsmath} 
\usepackage{mathtools}



\usepackage[usenames]{color} % Only used in comment commands
\definecolor{Blue}{rgb}{0,0.16,0.90}
\definecolor{Red}{rgb}{0.90,0.16,0}
\definecolor{DarkBlue}{rgb}{0,0.08,0.45}
\definecolor{ChangedColor}{rgb}{0.9,0.08,0}
\definecolor{CommentColor}{rgb}{0.2,0.8,0.2}
\definecolor{ToDoColor}{rgb}{0.1,0.2,1}

% *** Use this definition of the command to show the comments ***
\newcommand{\todo}[1]{\textbf{\color{ToDoColor} TODO: #1}}
\newcommand{\changed}[0]{\textbf{\color{ChangedColor} Changed: }}
\newcommand{\hootan}[1]{\textbf{\color{CommentColor} /* #1  (hootan)*/}}
\newcommand{\martin}[1]{\textbf{\color{CommentColor} /* #1  (martin)*/}}
\newcommand{\commentout}[1]{}


\newtheorem{mydef}{Definition}
\newtheorem{mythe}{Theorem}
\newtheorem{mylem}{Lemma}
\begin{document}

\title{A Theoretical Framework for Studying Random Walk Planning}
\author{\name Hootan Nakhost \email nakhost@ualberta.ca \\
 \addr University of Alberta, 
Edmonton, Alberta, Canada\\
\AND 
\name Martin M\"uller \email mmueller@ualberta.ca\\
\addr University of Alberta, 
Edmonton, Alberta, Canada\\
}

% For research notes, remove the comment character in the line below.
% \researchnote

\maketitle


\begin{abstract}
Random walks are a relatively new component used in several state-of-the-art satisficing planners.
Empirical results have been mixed: while the approach clearly outperforms more systematic search methods
such as weighted A* on many planning domains, it fails in many others. So far, the explanations for these
empirical results have been somewhat ad hoc.
This paper proposes a formal framework for comparing the performance of random walk and systematic search methods.
Fair homogeneous graphs are
proposed as graph classes that 
represent characteristics of the state space of prototypical planning domains, 
while still allowing a theoretical analysis of the performance of both random walk 
and systematic search algorithms.
This gives well-founded insights into the relative strengths and weaknesses of the two approaches.
The close relation of the models to some well-known planning domains is shown.

One main result is that in contrast to systematic search, where the branching factor plays a decisive role,
the performance of random walk methods is determined to a large degree by the Regress Factor, 
the ratio between the probability of regressing away from the goal and the probability of progressing towards it. 
By considering both branching and regress factors of a state space, 
it is possible to explain the relative performance 
of random walk and systematic search methods. 
\end{abstract}

\section{Introduction}
Random walks, which are paths through a search space that follow
successive randomized state transitions, 
are a main building block of prominent 
search algorithms such as Stochastic Local Search techniques 
for SAT \cite{selman:etal:aaai-92,wei:etal:jsat-08} and 
Monte Carlo Tree Search in game playing and puzzle solving
\cite{dave,Finnsson,DBLP:conf/ijcai/Cazenave09}. 

Inspired by these methods, several recent satisficing planners also
utilize random walk techniques. Identidem \cite{identidem} performs 
a hill-climbing search that uses random walks to escape from plateaus or saddle points.
All visited states are evaluated using a heuristic function. The random walks are biased 
towards the states with lower heuristic values. Arvand \cite{Arvand} 
takes a more radical approach:
it relies exclusively on a set of
random walks to determine the next state in its local search.
For efficiency, it only evaluates  the endpoints of those random walks. 
Arvand also learns to bias its random walks towards more promising search regions over time. 
Roamer \cite{roamer} enhances its best-first search (BFS) with random walks,
aiming to escape from \textit{search plateaus} where the heuristic is uninformative. 

While the success of random walk methods in other research areas serves as
a good general motivation, such work did not provide
an explanation for why these planners perform well.  
Three points have been noted as
main advantages of random walks for planning:
\begin{itemize}
\item Random walks are more effective than systematic search
approaches for escaping from regions where
heuristics provide no guidance \cite{identidem,Arvand,roamer}.
\item Increased sampling of the search space by random walks adds a beneficial
\textit{exploration} component to balance the \textit{exploitation} of the heuristic in planners \cite{Arvand}.  
\item  Combined with proper \textit{restarting} mechanisms,
random walks can avoid most of the time
wasted by systematic search in dead ends. Through restarts, random walks can rapidly back out of 
unpromising search regions \cite{identidem}. 
\end{itemize}

While these explanations are intuitively appealing, there is little direct
empirical or theoretical evidence supporting them. 
Typically, random walk planners are evaluated by measuring their coverage, 
runtime, or plan quality. While such results demonstrate that random walks can perform well
in practice, they provide no detailed insights into
\textit{why} they work. 
For example, there have been no measurements which directly show that random walks really do
escape more quickly from plateaus than other, more systematic approaches. 

\subsection{A First Motivating Example}
The main goal of the current paper is a careful theoretical investigation of the first 
point above: the question of how different search algorithms used in planning are able to 
escape from plateaus. As an example, consider the following well-known plateau for
the FF heuristic, $h_{FF}$, discussed in \cite{Helmert04}. 
Recall that $h_{FF}$ estimates the goal distance by
solving a relaxed planning problem in which all the negative effects of actions are ignored. 
Consider a transportation domain in which trucks are used to move packages between $n$ locations
connected in a single chain $c_1,\cdots,c_n$.
The goal is to move one package from $c_n$ to $c_1$.
%\hootan{Do we need a picture?}
%\hootan{Do we need to explain why this is a plateau?}
Figure \ref{fig:transport} shows the results of a basic scaling experiment on this domain with $n=10$ locations,
varying the number of trucks $T$ from 1 to 20. All trucks start at $c_2$. 
The results compare basic
Monte Carlo Random Walks (MRW) from Arvand-2011 and basic Greedy Best-First Search (GBFS) from LAMA-2011. 
Figure \ref{fig:transport} shows how the runtime of GBFS grows quickly 
with the number of trucks $T$
until it exceeds the memory limit of 64 GB. 
This is expected, since the effective branching factor grows with $T$. However,
the increasing branching factor has little effect on MRW: the runtime grows only linearly in $T$. 

\subsection{Choice of Basic Search Algorithms: Why No Enhancements?}

All the examples in this paper use state-of-the-art implementations of
basic, unenhanced search methods.
GBFS implemented in LAMA-2011 is used as a representative
of the systematic search methods, while the MRW implementation of Arvand-2011 
represents random walk methods.
Both programs use $h_{FF}$ for their evaluation.
Enhancements such as preferred operators in LAMA and Arvand, multi-heuristic search in LAMA,
or Monte Carlo Helpful Actions (MHA) in Arvand are switched off. 

The reasons are:
\begin{enumerate}
\item This paper studies theoretical models that can explain the substantially different behavior of random walk and
systematic search methods. Simple search methods make it possible to align the theoretical results closely with
practical experiments.
\item Enhancements may benefit both methods in different ways, or be applicable to only one method, and may thus
confuse the picture. Studying theoretical models that can handle such enhancements remains future work.
\item The focus of this paper is to understand the behavior of these two search paradigms in regions
where there is a lack of guiding information, such as plateaus. Therefore, in some examples even
a blind heuristic is used. While enhancements can certainly have a great influence on search parameters
such as branching and regress factors or search depth, the authors believe that the fundamental differences
in search behavior will remain.
\end{enumerate}
%This type of study in no way limits the applicability of the results
%because no matter how good the enhancements and the heuristic functions are designed
%there will be still search regions where none of these can provide any guidance and the power of the
%search to find a way out is the thing that matters.  

\subsection{Homogeneous and Weakly Homogeneous Graphs}
To study the behavior of search algorithms, two classes of graphs that model the search space of
planning problems are proposed: 
\textit{homogeneous} and \textit{weakly homogeneous} graphs. 
The key property used to analyze random walks on these graphs is their \textit{regress factor} $\mathit{rf}$: 
the ratio of the probability of the random walk \textit{regressing} away from a goal to the 
probability of \textit{progressing} towards 
a goal. In homogeneous graphs, almost all nodes share the same $\mathit{rf}$. 
Theorem \ref{thr:FH} shows that $\mathit{rf}$ plays almost the same
role as the branching factor $b$ in systematic search: runtime grows exponentially with base $\mathit{rf}$ 
as long as 
$\mathit{rf} > 1$. In practice, large parts of the state 
space of tasks in Transport and Grid are close to homogeneous graphs.

In the \textit{weakly homogeneous graph} model, $\mathit{rf}$ is no longer constant over the whole graph, 
but it depends only on the distance to a goal. Theorem  \ref{thr:FWH} extends the analysis to this graph class.
%\martin{You said THE goal, I say A goal. Does it matter? Is the definition general for multiple goal states? If not need to
%put in the restriction somewhere.}
%\martin{what are the results for this case?}
%Theorem \ref{thr:weak} shows that the hitting time in this graph is
%a function of the multiplication of all $\mathit{rf}$, defined for each goal distance in the graph.
\commentout{
For both models examples that relate the models to standard
planning benchmarks are given, and possible
ways to improve the basic random walks  are discussed.} 
The state space of Gripper is close to a weakly homogeneous graph.

\subsection{Restarting Random Walks (RRW)}

Besides $\mathit{rf}$, the other key variable affecting the average runtime of basic random walks
is the largest goal distance $D$ in the whole graph, which appears in the exponent. 
For large $D$, the \textit{restarting random walks} (RRW) model
can offer a substantial performance advantage. At each search step,
an RRW restarts from a fixed initial state $s$ with probability $r$. 
Theorem \ref{thr:IRH} proves that the expected runtime of RRW
depends only on the goal distance of $s$, not on $D$. 

\begin{figure}
\centering
\includegraphics[width=0.47\textwidth ]{RESOURCES/transport.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:transport} Average runtime of GBFS and MRW when varying the number of trucks (x-axis) in the Transport domain. Missing data points mean that the planner exceeded the memory limit.}
\vspace{-0.2cm}
\end{figure}

\section{Background and Notation}
Notation follows standard references such as \cite{Norris}.
Throughout the paper, $P(e)$ denotes the 
probability that an event $e$ occurs. In all definitions, let $G=(V,E)$ be a directed graph.

\begin{mydef}[Markov Process]
Let $M= X_0, \dots, X_N$ be a sequence of random variables. $M$ is a Markov process
iff $P(X_n = j_n | X_{n-1} = j_{n-1}, \dots, X_{0} = j_{0}) = P(X_n = j_n | X_{n-1} = j_{n-1})$. The notation 
$p_{uv}$ denotes the transition probability $P(X_n = v | X_{n-1} = u)$ of $M$.
\end{mydef}

\begin{mydef}[Distance $d_G$]
For $u,v \in V$, $d_G(u,v)$ is the length of a shortest path from $u$ to $v$;
if there is no path from $u$ to $v$, then $d_G(u,v) = \infty$. 
The distance $d_G(v)$ of a \textit{single} vertex $v$ in $G$ is the length
of a longest shortest path from a node in $G$ to $v$: $d_G(v)=\max_{x \in V} d_G(x, v)$.
\end{mydef}

\begin{mydef}[Neighborhood]
The \textit{neighborhood} of $u \in V$ is the set of all vertices
at distance 1 from $u$:  $N_G(u)=\{v \mid v \in V \wedge d_G(u,v) = 1\}$.
\end{mydef}
%\martin{why neighborhood, not successors? would neighborhood not usually include (v,u) as well?}
\begin{mydef}[Random Walk]
A random walk on $G$ is a Markov process $M_G= X_0, \dots, X_N$ where for all $0 \leq n \leq N$, $X_n$ 
is a random variable with range $V$. The transition probabilities of $M_G$ are defined as follows: $p_{uv} = \frac{1}{|N_G(u)|}$ if $(u,v) \in E$,
and $p_{uv} = 0$ if $(u,v) \notin E$.
\end{mydef}

\begin{mydef}[Hitting Time]
For $u,v \in V$, the hitting time $h_{uv}$ is the expected number of steps in a random walk that starts from $u$ 
and reaches $v$ for the first time.
\end{mydef}
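For intuition, hitting times in a small explicit graph can be computed from the standard first-step equations $h_{vv}=0$ and $h_{uv} = 1 + \frac{1}{|N_G(u)|}\sum_{w \in N_G(u)} h_{wv}$ \cite{Norris}. The following Python sketch is a minimal illustration, not part of any planner; the three-node path graph is an arbitrary example:

```python
# Hitting times h_{uv} from the first-step equations:
#   h_v = 0,  h_u = 1 + (1/|N(u)|) * sum of h_w over successors w of u.
# Solved by fixed-point (Gauss-Seidel) iteration; converges for graphs
# in which the target is reachable from every node.

def hitting_times(graph, target, iters=10000):
    h = {u: 0.0 for u in graph}
    for _ in range(iters):
        for u in graph:
            if u != target:
                h[u] = 1.0 + sum(h[w] for w in graph[u]) / len(graph[u])
    return h

# Undirected path 0 -- 1 -- 2, encoded as a directed graph:
path = {0: [1], 1: [0, 2], 2: [1]}
h = hitting_times(path, target=2)
# h[0] converges to 4 and h[1] to 3
```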

\begin{mydef}[Progress Time]
For $x,v \in V$, the progress time $u_{xv}$ is the expected number of steps in a random
walk that starts from $x$ until it gets one step closer to $v$ for the first time. 
\end{mydef}


\begin{mydef}[Regress Factor]
Let $u,v \in V$, and let $X: V \rightarrow V$ be a random variable
with the following PMF for all $(i, j) \in V^2$: 

\begin{align}
\label{eq:homos}
P(X(i) = j) =
  \begin{dcases}
  \frac{1}{|N_G(i)|}  &\text{if }  (i, j) \in E \\
  0   &\text{if } (i, j)  \notin E \\
  \end{dcases}
\end{align}


%that given a vertex $k$ selects a vertex from $N_G(k)$ uniformly at random.\\

\noindent The progress chance of $u$ regarding $v$, $pc(u,v)$, is 
the probability of getting closer to $v$ after one random step at $u$: $P(d_G(X(u), v) = d_G(u, v)-1)$. \\

\noindent The regress chance of $u$ regarding $v$, $rc(u,v)$, is 
the probability of getting further away from $v$ after one random step at $u$: $P(d_G(X(u), v) = d_G(u, v)+1)$.\\ 

\noindent The stalling chance of $u$ regarding $v$, $sc(u,v)$, is 
the probability of staying at the same distance from $v$ after one random step at $u$: $P(d_G(X(u), v) = d_G(u, v))$. \\

\noindent The infinite regress chance of $u$ regarding $v$, $irc(u,v)$, is 
the probability of reaching a vertex at infinite distance from $v$ after one random step at $u$: $P(d_G(X(u), v) = \infty)$. \\

\noindent The regress factor of $u$ regarding $v$ is $\textit{rf}(u,v)=\frac{rc(u,v)}{pc(u,v)}$.
\end{mydef}
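For small explicit graphs, these chances can be computed directly from breadth-first search distances. The following Python sketch is an illustration only; the path graph at the bottom is an arbitrary example, not a planning state space:

```python
from collections import deque

# d_G(u, v) for all u: BFS over reversed edges, starting from v.
def dist_to(graph, v):
    rev = {u: [] for u in graph}
    for u, nbrs in graph.items():
        for w in nbrs:
            rev[w].append(u)
    d = {u: float('inf') for u in graph}
    d[v] = 0
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in rev[x]:
            if d[y] == float('inf'):
                d[y] = d[x] + 1
                queue.append(y)
    return d

# Progress, regress and stalling chances of u regarding v.
def chances(graph, u, v):
    d = dist_to(graph, v)
    nbrs = graph[u]
    pc = sum(d[w] == d[u] - 1 for w in nbrs) / len(nbrs)
    rc = sum(d[w] == d[u] + 1 for w in nbrs) / len(nbrs)
    sc = sum(d[w] == d[u] for w in nbrs) / len(nbrs)
    return pc, rc, sc

# Undirected path 0 -- 1 -- 2: from vertex 1, one neighbor progresses
# towards 2 and one regresses, so pc = rc = 1/2 and rf = rc/pc = 1.
path = {0: [1], 1: [0, 2], 2: [1]}
pc, rc, sc = chances(path, 1, 2)
```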


%\begin{mydef}[Distance]
%Let $G=(V,E)$ denote a directed graph. Let $u$ and $v$ be two vertices in the graph, 
%then the distance function $d_G(u,v)$ is the length of the shortest path from $u$ to $v$. 
%If there is no path from $u$ to $v$, then $d_G(u,v) = \infty$, and $u$ is a deadend.
%It is also beneficial to define the distance function for a single vertex $v$: $d_G(v)$ is the length
%of the largest shortest path to $v$, i.e., $d_G(v)=max_{x \in V} d_G(x, v)$
%\end{mydef}
%\begin{mydef}[Neighborhood]
%Let $G=(V,E)$ denote a directed graph. Let $u$ be a vertex in the graph. The neighborhood of $u$ is the set of all vertices that
%are in distance 1 of $u$:  $N_G(u)=\{v | v \in V \wedge d_G(u,v) = 1\}$.
%\end{mydef}
%
%\begin{mydef}[Random Walk]
%Let $G=(V,E)$ be a graph. A random walk on $G$ is a Markov chain $M_G$ such that the vertices in the graph
%are the states in $M_G$ and for any two vertices $u, v \in V$, the transition probability $p_{uv} = \frac{1}{|N_G(u)|}$ if $(u,v) \in E$,
%and $p_{uv} = 0$ if $(u,v) \notin E$.
%\end{mydef}
%
%\commentout{
%\begin{mydef}[Restarting Random Walk]
%Let $G=(V,E)$ be a graph, $s \in V$, and $ r \in [0, 1]$. A restarting random walk $RRW(G, s, r)$
%is a Markov chain $M_G$ such that the vertices in the graph
%are the states in $M_G$ and for any two vertices $u, v \in V$, the transition probability 
%\[
% p_{uv}=  
%  \begin{dcases}
%   \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v \neq x \\
%   r + \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v = x \\ 
%   0 &  \text{if } (u,v) \notin E, v \neq x \\ 
%   r &  \text{if } (u,v) \notin E, v = x \\ 
%  \end{dcases}
%\]
%
%% A restarting random walk is a random walk that at each vertex $v$ with the probability $r$ transitions to the starting
%%vertex and with the probability $1-r$ transitions to one of the neighbors of $v$ uniformly randomly. 
%\end{mydef}}
%
%\begin{mydef}[hitting time]
%The hitting time $h_{uv}$ is the expected number of steps in a random walk that starts from $u$ until it reaches $v$ for the first time.
%\end{mydef}
%
%\begin{mydef}[progress time]
%The progress time $p_{uv}$ is the expected number of steps in a random
%walk that starts from $u$ until it gets one step closer to $v$ for the first time. 
%\end{mydef}
%
%\begin{mydef}[Regress Factor]
%Let $G=(V,E)$ be a directed graph. Let $u$ and $v$ be two vertices in the graph, and $X: V \rightarrow V$ be a uniformly random variable
%that given a vertex $k$ selects a random vertex from $N_G(k)$.\\ 
%
%\noindent The progress chance of $u$ regarding $v$, $pc(u,v)$, is 
%the probability of getting closer to $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v)-1)$. \\
%
%\noindent The regress chance of $u$ regarding $v$, $rc(u,v)$, is 
%the probability of getting further from $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v)+1)$. \\
%
%\noindent The stalling chance of $u$ regarding $v$, $sc(u,v)$, is 
%the probability of staying at the same distance of $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v))$. \\
%
%\noindent The infinite regress chance of $u$ regarding $v$, $irc(u,v)$, is 
%the probability of reaching a vertex with infinite distance of $v$ after one random step at $u$: $pr(d_G(X_u, v) = \infty)$. \\
%
%\noindent The regress factor of $u$ regarding $v$ is $\textit{rf}(u,v)=\frac{rc(u,v)}{pc(u,v)}$.
%\end{mydef}


\subsection{Plateaus, Exit Points and Exit Time}
A \textit{plateau} $P \subseteq V$ is a connected subset of states which all have the same heuristic value $h_P$.
A state $s$ is an \textit{exit point} of $P$ if $s \in N_G(p)$ for some $p \in P$,
and $s$ has heuristic value $h(s) < h_P$. The \textit{exit time} of a 
random walk on a plateau $P$ is the expected number
of steps in the random walk until it reaches an exit point for the first time. 
\commentout{
In the corresponding graph $G(V, E)$ of a plateau $P$, there is a one to one relation between $V$ and the states
on the plateau, and for $u,v \in V$, $(u,v) \in E$ iff there is an action from the state represented by $u$
to the state represented by $v$. Now if the exit point(s) 
in a plateau is defined as the goal(s) of a RW in the corresponding
graph, then the hitting time in the graph equals the exit time in the plateau.}

\section{Fair Homogeneous Graphs}
A fair homogeneous (FH) graph $G$ is the simplest state space model introduced here. 
\textit{Homogeneity} means that both progress and regress
chances are constant over all nodes in $G$. \textit{Fairness} means that 
an action can change the goal distance by at most one.

\begin{mydef}[Homogeneous Graph]

Given $v \in V$,
$G$ is $v$-homogeneous iff there exist two real functions $pc_G(x)$ and $rc_G(x)$ 
with domain $V$ and range $[0, 1]$ such that for any vertex $u \in V$ the following
two conditions hold:
\begin{enumerate}
%\item $sc(u,v) = sc_G(v)$.
\item If $u \neq v$ then $pc(u,v) = pc_G(v)$.
\item If $d_G(u,v) < d_G(v)$ then $rc(u,v) = rc_G(v)$. 
\end{enumerate}
$G$ is homogeneous iff it is $v$-homogeneous for all $v \in V$.  
The functions $pc_G(x)$ and $rc_G(x)$ are respectively called
the progress and the regress chance of $G$ regarding $x$. 
The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x)=rc_G(x)/pc_G(x)$.
\end{mydef}

\begin{mydef}[Fair Graph]
$G$ is fair for $v \in V$ iff for all $u \in V$, $pc(u,v)+rc(u,v)+sc(u,v) = 1$. $G$
is fair iff it is fair for all $v \in V$. 
\end{mydef}

In a fair graph, a random step can change the goal distance by at most one unit. 

\begin{mylem}
Let $G=(V,E)$ be an FH graph. For $u, x, v \in V$, $h_{uv}=h_{xv}=h_d$
if $d_G(u,v)=d_G(x,v)=d$. 
\end{mylem}

\begin{proof} 
Let $p=pc_G(v)$ and $q=rc_G(v)$ be the progress and regress chances in $G$. 

Let $P(i,u)$ be the probability that a random walk of length $i$ starting from $u$ ends in $v$. Such a walk
consists of $k+d$ progress steps, $k$ regress steps, and $i-2k-d$ stalling steps for some $k \geq 0$, so
\begin{eqnarray}
P(i,u)&=&\sum_{0 \leq k \leq i} {i \choose{k+d, k, i-2k-d}} p^{k+d} q^{k} (1-p-q)^{i-2k-d} \nonumber 	
\end{eqnarray}
This expression depends on the start vertex only through its goal distance. Since $d_G(u,v)=d_G(x,v)=d$,
it follows that $P(i,u)=P(i,x)$.

Let $P_f(i, u)$ be the probability of a RW of length $i$ that starts from $u$ and ends in $v$ for the first time. 
\begin{eqnarray}
P_f(i,u)&=&P(i, u) \prod_{j=1}^{i-1}(1-P(j, u)) \nonumber \\
	&=&P(i, x) \prod_{j=1}^{i-1}(1-P(j, x)) \nonumber \\
	&=&P_f(i,x) \nonumber 	
\end{eqnarray}

Then, 
\begin{eqnarray} 
h_{uv}&=& \sum_{0 \leq i} i P_f(i, u) \nonumber \\
	  &=& \sum_{0 \leq i} i P_f(i ,x)  \nonumber \\
	  &=& h_{xv}   \nonumber 
\end{eqnarray}

\end{proof}

\noindent The notation $h_d$ denotes the hitting time of a random walk starting from a node at distance $d$ from
the goal in an FH graph.

\begin{mythe} \label{thr:FH}

Let $G=(V,E)$ be an FH graph. For $u,v \in V$, let 
$p=pc_G(v)$, $q=rc_G(v)$, $D = d_G(v)$, and $x=d_G(u,v)$.
Then the hitting time $h_{uv}$ is: 
\[
h_{uv}=
  \begin{dcases}
   \beta_0\left(\lambda^{D}-\lambda^{D - x}\right) + \beta_1x&\text{if }  q \neq p \\
   \alpha_0(x- x^2) + \alpha_1Dx  &  \text{if } q =p
  \end{dcases}
\]
where $\lambda = \frac{q}{p}$, $\beta_0 = \frac{q}{(p-q)^2}$, $\beta_1 = \frac{1}{p-q}$, $\alpha_0 = \frac{1}{2p}$, $\alpha_1 = \frac{1}{p}$.
%Let $G=(V,E)$ be an FH graph. Let $v\in V$ be a vertex in $G$. Let $p=pc_G(v)$, and $q=rc_G(v)$.
%For any vertex $u$, the hitting time $h_{uv}=\Theta((\frac{q}{p})^{d_G(v)})$ if $p < q$, $h_{uv}=\Theta(d_G(v) \times d_G(u,v))$ if $p=q$, and
%$h_{uv}=\Theta(d_G(u,v))$ if $p>q$.
\end{mythe}

\begin{proof}
By first-step analysis \cite{Norris} and the preceding lemma, the hitting times $h_x$ from goal distance $x$
satisfy the following linear equations, where Equation \ref{eq:recursion} holds for $0 < x < D$: 
\begin{eqnarray}
h_0 & = & 0 \label{eq:boundary_N} \\
h_x & = & p h_{x-1} + q  h_{x+1} + (1-p-q)  h_{x} + 1 \label{eq:recursion} \\
h_D & = & p h_{D-1} + (1-p) h_D + 1 \label{eq:boundary_0} 
\end{eqnarray}

\noindent Consider the case $p \neq q$. We show that there exist constants $A$, $B$ and $C$ such that 
$h_x=A(\frac{p}{q})^x + Bx + C$ satisfies the above equations. Using Equation \ref{eq:boundary_N}:


\begin{eqnarray}
A(\frac{p}{q})^0 + B0 + C & = & 0 \nonumber \\
A & = & -C  \label{eq:a} 
%(p+q)(A+C) & = & (p+q)(\frac{Aq}{p}+B+C)  \nonumber  \\
%A & = & \frac{Bp}{p-q}+ \frac{p}{p^2-q^2}  \label{eq:a}
\end{eqnarray}

\noindent Using Equation \ref{eq:recursion} we have:
\begin{eqnarray}
 A(\frac{p}{q})^x + Bx + C & = & A\frac{p^x}{q^{x-1}} + Bp(x-1)+ Cp + A\frac{p^{x+1}}{q^{x}} + Bq(x+1)+Cq  \nonumber \\ 
 && + (1-p-q)(A(\frac{p}{q})^x + Bx + C) +  1 \nonumber \\
(p+q) (A(\frac{p}{q})^x + Bx + C) & = & A(\frac{p}{q})^x(p+q) + Bx(p+q)+Bq-Bp + C(p+q) + 1  \nonumber  \\
B & = & \frac{1}{p-q} = \beta_1 \label{eq:b}  
\end{eqnarray}

Finally, using Equations \ref{eq:a} and \ref{eq:b} and the boundary Equation \ref{eq:boundary_0}:
\begin{eqnarray}
A(\frac{p}{q})^D + BD + C & = & (1-p) (A(\frac{p}{q})^D + BD + C) + p(A(\frac{p}{q})^{D-1} + B(D-1) + C) + 1 \nonumber  \\
p(A(\frac{p}{q})^D + BD + C) & = & pA(\frac{p}{q})^{D-1} + pBD-pB + pC + 1 \nonumber \\
(p-q)(A(\frac{p}{q})^D) & = & (1-pB) \nonumber  \\
A &=& \frac{-q}{(p-q)^2}(\frac{q}{p})^D = -\beta_0\lambda^D \nonumber 
\end{eqnarray}

\noindent Therefore: 
\begin{eqnarray}
h_x =  \beta_0\left(\lambda^{D}-\lambda^{D - x}\right) + \beta_1x  \label{eq:FH_last} 
\end{eqnarray}

\noindent Since the above linear equations have a unique solution, Equation \ref{eq:FH_last} proves that $h_{x}=\Theta(\beta_0 \lambda^{D} + \beta_1x)$ if $q\neq p$. For $p=q$, a similar approach can be used to show that $h_x = \alpha_0(x- x^2) + \alpha_1Dx $, and therefore, $h_{x}=\Theta(\alpha_1 Dx)$.
\end{proof}

%Note that $h_{uv}$ increases monotonically with $q$ and decreases with $p$.
%%\martin{is it true? even near p=q?}
%From Theorem \ref{thr:homo} it follows that:
%
%\begin{align}
%\label{eq:homos}
%h_{uv} \in
%  \begin{dcases}
%  \Theta\left(\frac{q}{(p-q)^2}(\frac{q}{p})^D\right) &\text{if }  q > p \\
%  \Theta(x) &  \text{if } q <p  \\
%  \Theta(x \times D) & \text{if } q =p 
%  \end{dcases}
%\end{align}
%

When $q \neq p$, the hitting time grows exponentially in $D$ with base the regress factor $\textit{rf}=\lambda = q/p$; as long as
 \textit{rf} is fixed, changing other 
 structural parameters such as the branching factor $b$ can increase the hitting time only linearly.
 For $q > p$, it does not matter how close the start state is to the goal: the hitting time
depends on $D$, the largest goal distance in the graph. 
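The closed form of Theorem \ref{thr:FH} is easy to check numerically. The following Python sketch solves the defining linear equations of the hitting time on a fair homogeneous chain by value iteration and compares the result against the formula for $q \neq p$; the values of $p$, $q$ and $D$ are arbitrary examples:

```python
# Theorem: h_x = beta0 * (lam^D - lam^(D-x)) + beta1 * x  for q != p,
# with lam = q/p, beta0 = q/(p-q)^2, beta1 = 1/(p-q).

def hitting_closed_form(x, D, p, q):
    lam = q / p
    beta0 = q / (p - q) ** 2
    beta1 = 1.0 / (p - q)
    return beta0 * (lam ** D - lam ** (D - x)) + beta1 * x

# Direct solution of h_0 = 0, h_x = 1 + p h_{x-1} + q h_{x+1}
# + (1-p-q) h_x for 0 < x < D, and h_D = 1 + p h_{D-1} + (1-p) h_D.
def hitting_value_iteration(D, p, q, iters=20000):
    h = [0.0] * (D + 1)
    for _ in range(iters):
        for x in range(1, D):
            h[x] = 1.0 + p * h[x - 1] + q * h[x + 1] + (1 - p - q) * h[x]
        h[D] = 1.0 + p * h[D - 1] + (1 - p) * h[D]
    return h

p, q, D = 0.3, 0.2, 8
h = hitting_value_iteration(D, p, q)
# h[x] agrees with hitting_closed_form(x, D, p, q) for every x
```

The same check with $p$ and $q$ swapped (so $q > p$) illustrates the exponential growth of the $\beta_0\lambda^D$ term.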

\subsection{Analysis of the Transport Example}
Theorem \ref{thr:FH} helps explain the experimental results in Figure \ref{fig:transport}.  
In this example
the plateau consists of all the states encountered before loading the package
onto one of the trucks. Once the package is loaded, $h_{FF}$ can guide the search
directly towards the goal. Therefore, the exit points of the plateau are the states in which 
the package is loaded onto a truck. 
%\martin{Why are truck locations not part of hFF???}
%\hootan{I don't get this quesiton..}
Let $c_m$, with $m<n$, be the location of a most advanced truck in the chain.
For all non-exit states of the search space, $q \leq p$ holds:
there is always at least one action which progresses towards a closest exit point,
namely moving a truck from $c_m$ to $c_{m+1}$.
There is at most one regressing action: in case $m>1$ and there is only
a single truck at $c_m$, moving it to $c_{m-1}$.
Setting $q = p$ for all states yields an upper
bound on the hitting time, since increasing the ratio can only increase the hitting time. 
See Theorem \ref{thr:bound} for details.
By Theorem \ref{thr:FH}, $ -\frac{x^2}{2p}+(\frac{2D+1}{2p})x$ is an upper bound
for the hitting time. 
%Now if we increase the number of trucks both $q$ and $p$ values will decrease but the regress factors
%do not change. 
If the number of trucks is multiplied by a factor $M$,
then $p$ is divided by at most $M$, and therefore the upper bound is
also multiplied by at most $M$: 
the worst-case runtime bound grows only linearly with the number of trucks. In contrast, 
systematic search methods suffer greatly from an increasing number of vehicles,
since it increases
the effective branching factor $b$, and the runtime of methods such as
greedy best-first search, A* and IDA* typically grows as $b^d$. 

This effect can be observed in all planning problems where increasing the number of 
 objects of a specific type
does not change the regress factor. Examples are the vehicles
 in transportation domains such as Rovers, Logistics, Transport, and Zeno Travel, 
which do not appear in the goal propositions.
%or agents who have similar functionalities and do not appear in the goal, e.g., satellites in the satellite domain. 
Another example is ``decoy'' objects which cannot be used to reach the goal. Actions that affect only
 the state of such
objects do not change the goal distance, so increasing the number of such objects has no effect on $\textit{rf}$
 but can blow up $b$. Of course, techniques such as  
plan-space planning, backward-chaining planning, or preferred operators can often prune such actions.
%\martin{ mention helpful actions? Relaxed planning graph? FF heuristic?} 
%\hootan{usually actions that change the state of these objects do not change the heuristic values.}

Theorem \ref{thr:FH} suggests that if $q > p$ and the current state is close to 
an exit point in the plateau, then using
systematic search is more effective. In this case random 
walks are extremely inefficient since they move away from the exit with high probability. 
This problem can be fixed to some degree by using restarting random walks.

\section{Fair Weakly Homogeneous Graphs}
\textit{Fair weakly homogeneous (FWH) graphs} generalize fair homogeneous graphs by letting 
$pc$ and $rc$ depend on the goal distance instead of being constant. 

\begin{mydef}[Weakly Homogeneous Graph]
For $v \in V$, $G$ is weakly $v$-homogeneous iff 
there exist two real functions 
$pc_G(x, d)$ and $rc_G(x, d)$,
mapping the domain $V\times \{0, 1, \dots, d_G(v)\}$ to the range $[0, 1]$,
such that for any two vertices $u,x \in V$ 
with $d_G(u,v)=d_G(x,v)$ the following two conditions hold:
\begin{enumerate}
\item If $d_G(u,v) \neq 0$, then\\ $pc(u,v) = pc(x,v) = pc_G(v, d_G(u,v))$.
\item $rc(u,v) = rc(x,v) = rc_G(v, d_G(u,v))$.
\end{enumerate}
$G$ is weakly homogeneous iff it is weakly $v$-homogeneous for all $v \in V$.  
$pc_G(x, d)$ and $rc_G(x,d)$ are called the
 progress chance and regress chance of $G$ regarding $x$. 
 The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x,d)=rc_G(x,d)/pc_G(x,d)$.
\end{mydef}

\begin{mylem} \label{lem:FWH}
Let $G=(V,E)$ be fair weakly homogeneous.
Then, for all $v,x,x' \in V$  with $d_G(x,v)=d_G(x',v)=d$, $h_{xv}=h_{x'v}$.
\end{mylem}


\begin{proof}
Let $D=d_G(v)$, $p_i = pc_G(v,i)$ and $q_i=rc_G(v,i)$.
For any $v' \in V$, let $u_{v'v}$ be the expected number of steps in a random
walk that starts from $v'$ until it progresses (by one) towards $v$ for the first time. 
By induction on $d$, we show that $u_{xv}=u_{x'v}=u_d$, where 
\begin{align}
\label{eq:unit}
u_d=
  \begin{dcases}
  \frac{q_d}{p_d}u_{d+1}+\frac{1}{p_d} &\text{if }  d < D  \\
  \frac{1}{p_d} &  \text{if } d = D \\
  \end{dcases}
\end{align}

For the base case $d=D$: before reaching goal distance $D-1$, 
the random walk is always at a state with goal distance $D$.
The probability of progressing towards
$v$ is $p_D$, so on average $\frac{1}{p_D}$ steps are needed
to progress, and $u_{xv}=u_{x'v}= \frac{1}{p_D}=u_D$.

Now suppose the lemma holds for $d+1$; we show that it also holds for $d$.
Call the last visit of the random walk to a state at distance $d$, which is immediately followed by
a step to distance $d-1$, a \textit{successful $d$-visit}, and all previous
visits at distance $d$ \textit{unsuccessful $d$-visits}.
Let $X$, $Y$, and $Z$ be three random variables that respectively 
measure the number of unsuccessful $d$-visits of the random walk,
the number of steps between two consecutive visits at distance $d$, 
and the total length of the random walk until it reaches distance $d-1$.
Counting one step for the successful $d$-visit itself, $Z=X \times Y + 1$. 

After each unsuccessful $d$-visit, the random walk
transitions to distance $d+1$ with probability $\frac{q_d}{1-p_d}$ and stalls at distance $d$
with probability $\frac{1-q_d-p_d}{1-p_d}$.
In the former case, by the induction hypothesis, the random walk performs an expected $u_{d+1}$ further steps 
until it returns to distance $d$. Therefore, 
$E[Y] =(\frac{q_d}{1-p_d}) (u_{d+1} + 1)+ (\frac{1-q_d-p_d}{1-p_d}) 1$. 
Each visit at distance $d$ is successful with probability $p_d$, so the expected number of
unsuccessful visits is $E[X]=\frac{1-p_d}{p_d}$.
Since $X$ and $Y$ are independent, $E[Z]=E[X]\times E[Y] + 1$, so:

\begin{eqnarray}
u_{xv} = E[Z] &=& 1+\left(\frac{q_d(u_{d+1} + 1)}{1-p_d} + \frac{1-q_d-p_d}{1-p_d} \right) \left(\frac{1-p_d}{p_d}\right)      \nonumber \\
&=& \frac{q_du_{d+1}}{p_d} +\frac{1}{p_d} = u_d   \nonumber 
\end{eqnarray}

The hitting time $h_{xv}$ is the sum of the expected numbers
of steps of the unit progressions from $x$ towards $v$: 
\begin{eqnarray} 
h_{xv}= \sum_{1 \leq i \leq d} u_i = h_{x'v} \label{eq:chain}
\end{eqnarray}

\end{proof}
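The recursion for the unit progress times $u_d$ in the proof above can be checked numerically. The following Python sketch computes hitting times on a chain with distance-dependent chances both via the recursion of Lemma \ref{lem:FWH} and by directly solving the first-step equations; the particular $p_d$, $q_d$ values are arbitrary illustrative choices:

```python
# Check the unit-progress-time recursion u_D = 1/p_D,
# u_d = (q_d/p_d) u_{d+1} + 1/p_d, and h_x = u_1 + ... + u_x
# on a chain 0..D whose chances depend on the goal distance d.

def hitting_via_recursion(p, q, D):
    u = [0.0] * (D + 1)
    u[D] = 1.0 / p[D]
    for d in range(D - 1, 0, -1):
        u[d] = (q[d] / p[d]) * u[d + 1] + 1.0 / p[d]
    h = [0.0] * (D + 1)
    for x in range(1, D + 1):
        h[x] = h[x - 1] + u[x]      # hitting time as sum of unit times
    return h

def hitting_via_value_iteration(p, q, D, iters=20000):
    h = [0.0] * (D + 1)             # first-step equations, h[0] = 0
    for _ in range(iters):
        for x in range(1, D):
            h[x] = 1.0 + p[x] * h[x - 1] + q[x] * h[x + 1] \
                   + (1 - p[x] - q[x]) * h[x]
        h[D] = 1.0 + p[D] * h[D - 1] + (1 - p[D]) * h[D]
    return h

D = 6
p = [0.0, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2]   # p[d] for d = 1..D
q = [0.0, 0.2, 0.2, 0.1, 0.3, 0.3, 0.0]   # q[d]; q[D] unused (fair graph)
```

Both routes give the same hitting times, as the lemma predicts.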




\begin{mythe} \label{thr:FWH}
Let $G=(V,E)$ be an FWH graph, and $v \in V$.
Let $p_i = pc_G(v,i)$, $q_i=rc_G(v,i)$, and $d_G(v) = D$.
Then for all $x \in V$, 
\begin{eqnarray} 
h_{xv} = \sum_{1 \leq d \leq d_G(x,v)} \left(\left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right)\right) \nonumber
\end{eqnarray}
\end{mythe}


\begin{proof}
According to Lemma \ref{lem:FWH},
\begin{eqnarray} 
h_{xv} &=& \sum_{1 \leq i \leq d} u_i  \nonumber \\
u_d &=& \frac{q_d}{p_d}u_{d+1}+\frac{1}{p_d} \quad (d < D) \nonumber \\ 
u_D &=& \frac{1}{p_D}  \nonumber 
\end{eqnarray}
Now by induction on $d$ we show that for $1 \leq d \leq D$
\begin{eqnarray} 
u_d=\left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right) \label{eq:distance}
\end{eqnarray}
For the base case $d=D$, the empty product and the empty sum leave $u_D = \frac{1}{p_D}$. Assume that Equation \ref{eq:distance} holds for $d+1$. Then,
\begin{eqnarray} 
u_d &=& \frac{q_d}{p_d}\left(\left(\prod^{D-1}_{i=d+1} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d+1}\left(\frac{1}{p_j}\prod^{j-1}_{i=d+1}\frac{q_i}{p_i}\right)\right)+\frac{1}{p_d}  \nonumber  \\
&=& \left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \frac{q_d}{p_d}\sum^{D-1}_{j=d+1}\left(\frac{1}{p_j}\prod^{j-1}_{i=d+1}\frac{q_i}{p_i}\right)+\frac{1}{p_d}     \nonumber \\
&=& \left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d+1}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right)+ \frac{1}{p_d}\prod^{d-1}_{i=d} \frac{q_i}{p_i}   \nonumber \\
&=& \left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right) \nonumber
\end{eqnarray}
According to Equation \ref{eq:chain}:
\begin{eqnarray} 
h_{xv} = \sum_{1 \leq d \leq d_G(x,v)} \left(\left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right)\right) \nonumber
\end{eqnarray}
\end{proof}

In analogy with homogeneous graphs, 
the largest goal distance $D$ and the regress factors $q_i/p_i$ are the main factors determining
the expected runtime of random walks in weakly homogeneous graphs. 
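The recurrence of Lemma \ref{lem:FWH} is easy to check numerically. The following sketch (ours, with illustrative chain parameters; not part of the paper's experiments) computes the unit progress times $u_d$ on a small FWH chain and verifies the resulting hitting times against an exact solution of the standard hitting-time linear system:

```python
from fractions import Fraction as F

# Illustrative FWH chain over goal distances 0..D: from distance d the walk
# progresses with chance p[d], regresses with chance q[d], otherwise stalls.
D = 5
p = {d: F(1, 2) for d in range(1, D + 1)}
q = {d: F(1, 4) for d in range(1, D)}
q[D] = F(0)  # no regress from the largest goal distance

# Unit progress times: u_D = 1/p_D, u_d = (q_d/p_d) u_{d+1} + 1/p_d.
u = {D: 1 / p[D]}
for d in range(D - 1, 0, -1):
    u[d] = q[d] / p[d] * u[d + 1] + 1 / p[d]
h_theorem = [sum(u[d] for d in range(1, x + 1)) for x in range(D + 1)]

# Independent check: solve the standard hitting-time equations
#   (p_d + q_d) h_d - p_d h_{d-1} - q_d h_{d+1} = 1,  with h_0 = 0,
# exactly, by Gaussian elimination over rationals (no pivoting needed
# for this diagonally dominant tridiagonal system).
A = [[F(0)] * D for _ in range(D)]
b = [F(1)] * D
for row, d in enumerate(range(1, D + 1)):
    A[row][row] = p[d] + q[d]
    if d > 1:
        A[row][row - 1] = -p[d]
    if d < D:
        A[row][row + 1] = -q[d]
for col in range(D):                       # forward elimination
    for r2 in range(col + 1, D):
        if A[r2][col] != 0:
            f = A[r2][col] / A[col][col]
            A[r2] = [a - f * c for a, c in zip(A[r2], A[col])]
            b[r2] -= f * b[col]
h = [F(0)] * D
for r2 in range(D - 1, -1, -1):            # back substitution
    h[r2] = (b[r2] - sum(A[r2][c] * h[c] for c in range(r2 + 1, D))) / A[r2][r2]

assert all(h[x - 1] == h_theorem[x] for x in range(1, D + 1))
```

Both computations agree exactly, because the unit progress recurrence is simply a backward substitution through the tridiagonal hitting-time system.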

\subsection{Example domain: One-handed Gripper}

\begin{table}[tp]%
\centering
\begin{tabular}[t]{ cclcccc }
Cat.   &  room   &  hand & $pc$ & $rc$ & $rf$ & $b$\\
\vspace{0.1cm}
$1$  &  $A$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1 & 1 \\
\vspace{0.1cm}
$2$  &   $A$   & empty & $\frac{|A|}{|A| + 1}$ & $\frac{1}{|A| + 1}$ & $\frac{1}{|A|}$ & $|A|$\\
\vspace{0.1cm}
$3$  &  $B$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1& 1\\
\vspace{0.1cm}
$4$  &  $B$   &  empty & $\frac{1}{|B| + 1}$ & $\frac{|B|}{|B| + 1}$ & $|B|$ & $|B|$\\
\end{tabular}
\caption{Structural properties of One-handed Gripper. Room specifies the
location of the robot. $|A|$ and $|B|$ denote the number of
balls in rooms A and B, $rf = rc/pc$ is the regress factor, and $b$ the effective branching factor.}
\label{table:gripper}
\end{table}

Consider a one-handed gripper
domain, where a robot must move $n$ balls from room A to room B, 
by using the actions of
picking up a ball, dropping its single ball, or moving to the other room. 
The states of the search space fall into the four categories shown in Table \ref{table:gripper}.
The search space is fair weakly homogeneous: any two states with
the same goal distance have the same distribution of
balls in the rooms and belong to the same category. The graph is fair
since no action can change the goal distance by more than one. Therefore,
Theorem \ref{thr:FWH} can be used to compute the expected hitting time. 

\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{RESOURCES/gripper.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:gripper} Average number of generated states as a function of the number of balls (x-axis) in the Gripper domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:gripper} plots the predictions of Theorem \ref{thr:FWH} together with
the results of a scaling experiment, varying $n$ for both random walks and greedy best-first search (GBFS). 
To simulate the behavior of both algorithms on plateaus that lack
heuristic guidance, a blind heuristic is used,
which returns 1 for any non-goal state and 0 for goals. 
Search stops as soon as it finds a state with a heuristic value lower than
that of the initial state. Because of the blind heuristic, the only such state is the goal state. 
The prediction matches the experimental results extremely well. 
Random walks outperform greedy best-first search. 
The reason is that while in most states the regress factor is close to the
effective branching factor of GBFS, in the states of category 2 of Table \ref{table:gripper},
which make up roughly a quarter of all states, the regress factor $\frac{1}{|A|}$ is significantly
smaller than the effective branching factor.

\section{Biased Action Selection for Random Walks}
Regress factors can be changed by biasing the action selection in the random walk. 
One very natural bias is to use a two-level scheme, where first an action type
is selected uniformly randomly, then the chosen action is grounded in a second step.
In the gripper domain, there are three types of actions: pick up, drop, and move. In the 
case of pick up, one of the balls in the robot's current room is selected uniformly at random in the second step. 

When using this biased selection, the search space is fair homogeneous with $q=p=\frac{1}{2}$. 
The experimental results and theoretical prediction for
such walks are included in Figure \ref{fig:gripper}. The hitting time 
grows only linearly with $n$ here. It is interesting that this natural
way of biasing random walks
is able to exploit the symmetry inherent in the gripper domain. 
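The two-level scheme can be made concrete in a few lines. The following sketch (the state encoding and helper names are ours, not from the paper) computes the progress chance induced by uniform type selection in each of the four state categories of Table \ref{table:gripper}, confirming $p=\frac{1}{2}$ exactly:

```python
from fractions import Fraction as F

# State: robot room ("A" or "B"), whether the hand holds a ball,
# and the ball counts in the two rooms. Goal: all balls in room B.
def action_types(room, holding, bA, bB):
    """Applicable action types under the two-level selection scheme."""
    types = []
    if holding:
        types.append("drop")
    elif (room == "A" and bA) or (room == "B" and bB):
        types.append("pickup")
    types.append("move")
    return types

def progress_chance(room, holding, bA, bB):
    """Chance that a two-level sample (uniform over applicable types,
    then uniform grounding) decreases the goal distance."""
    types = action_types(room, holding, bA, bB)
    p = F(0)
    for t in types:
        if t == "pickup" and room == "A":   # picking up in A progresses
            p += F(1, len(types))
        elif t == "drop" and room == "B":   # dropping in B progresses
            p += F(1, len(types))
        elif t == "move":
            # moving A->B progresses iff holding; B->A progresses iff empty
            if (room == "A") == holding:
                p += F(1, len(types))
    return p

# All four state categories of Table 1 have progress chance 1/2:
assert progress_chance("A", True, 3, 2) == F(1, 2)   # category 1
assert progress_chance("A", False, 3, 2) == F(1, 2)  # category 2
assert progress_chance("B", True, 3, 2) == F(1, 2)   # category 3
assert progress_chance("B", False, 3, 2) == F(1, 2)  # category 4
```

Since each of these states has exactly two applicable action types, one progressing and one regressing, the regress chance is also $\frac{1}{2}$, independent of the number of balls.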

\section{Restarting Random Walks}
The restarting random walk model used here is a random walk which
\textit{restarts} from a fixed initial
state $s$ with probability $r$ at each step,
and uniformly randomly chooses among neighbour states with probability $1-r$. 

\begin{mydef}[Restarting Random Walk]
Let $G=(V,E)$ be a graph, $s \in V$ the initial state, and $ r \in [0, 1]$. 
A restarting random walk $RRW(G, s, r)$
is a Markov chain $M_G$ with states $V$ and transition probability
$p_{uv}$ for $u, v \in V$ of:  
\begin{align*}
 p_{uv}=  
  \begin{dcases}
   \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v \neq s \\
   r + \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v = s \\ 
   0 &  \text{if } (u,v) \notin E, v \neq s \\ 
   r &  \text{if } (u,v) \notin E, v = s \\ 
  \end{dcases} 
\end{align*}
\end{mydef}
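A restarting random walk is straightforward to simulate; the following sketch (function names are ours) implements one step of $RRW(G, s, r)$ by first flipping the restart coin and otherwise moving to a uniformly random neighbour, which realizes exactly the transition probabilities $p_{uv}$ above:

```python
import random

def rrw_step(neighbors, u, s, r, rng=random):
    """One step of RRW(G, s, r): restart to s with probability r,
    otherwise move to a neighbour of u chosen uniformly at random."""
    if rng.random() < r:
        return s
    return rng.choice(neighbors[u])

def hitting_time(neighbors, s, v, r, rng=random):
    """Number of steps until the walk started at s first reaches v."""
    state, steps = s, 0
    while state != v:
        state = rrw_step(neighbors, state, s, r, rng)
        steps += 1
    return steps
```

On graphs with deadends a plain random walk ($r=0$) may never reach $v$, whereas any positive restart rate makes the walk reach $v$ with probability 1 as long as $v$ is reachable from $s$.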

Infinitely Regressable Homogeneous (IRH) graphs generalize FH graphs by relaxing the fairness condition.

\begin{mydef}[Infinitely Regressable Homogenous Graph]

For $v \in V$, $G$ is infinitely regressable (IR) $v$-homogeneous iff 
there exist three real functions 
$pc_G(x, d)$, $sc_G(x, d)$, and  $irc_G(x, d)$ 
mapping the domain $V\times \{0, 1, \dots, d_G(v)\}$ to the range $[0, 1]$ such that for any two vertices $u,x \in V$, 
if $d_G(u,v)=d_G(x,v)$, then the following three conditions hold:
\begin{enumerate}
\item $sc(u,v) = sc(x,v) =  sc_G(v, d_G(u,v))$.
\item If $d_G(u,v) \neq 0$ then $pc(u,v) = pc(x,v) = pc_G(v, d_G(u,v))$.
\item $irc(u,v) = irc(x,v) = irc_G(v, d_G(u,v))$.
\end{enumerate}
$G$ is IRH iff for any $v \in V$ it is IR $v$-homogeneous.  
The functions $pc_G(x, d)$, $sc_G(x,d)$ and $irc_G(x, d)$ are respectively called
the progress chance, the stall chance, and the infinite regress chance of $G$ regarding $x$. 
\end{mydef}


\begin{mylem} \label{lem:IRH}
Let $G=(V,E)$ be an IRH graph, let $RRW(G, s, r)$ be a restarting random walk, and let $k$ be the expected number of steps that the random walk
spends in a deadend before restarting from the initial state. 
Then, for all $v, x,x' \in V$  with $d_G(x,v)=d_G(x',v)=d$ and $d \leq d_G(s,v)$, $h_{xv}=h_{x'v}$.
\end{mylem}

\begin{proof}
Let $D=d_G(s, v)$, $p=pc_G(v)$, $c=sc_G(v)$, and $i=irc_G(v)$.
Let $u_{xv}$ denote the unit progress time from $x$ towards $v$. 
Similar to Lemma \ref{lem:FWH}, we show by induction on the goal distance $d$ that $u_{xv}=u_{x'v}$.


Suppose the lemma holds for $d+1$. To show that it also holds for $d$,
call the last visit to distance $d$, which is immediately followed by a step to distance $d-1$,
a \textit{successful $d$-visit}, and all earlier visits to distance $d$ \textit{unsuccessful $d$-visits}.
Let $X_d$, $Y_d$, and $Z_d$ be three random variables that respectively 
measure the number of unsuccessful $d$-visits before reaching goal distance $d-1$, the number of steps between two consecutive visits to distance $d$, 
and the total length of the random walk. Then $Z_d=X_d \times Y_d + 1$ and, since $X_d$ and $Y_d$ are independent, $E[Z_d]=E[X_d]\times E[Y_d] + 1$.

At each $d$-visit, the probability of progress equals $(1-r)p$. Therefore, for all nodes in the graph,
$E[X_d]=\frac{1-(1-r)p}{(1-r)p}$, the expected number of failures before the first success
of a geometric distribution with success probability $(1-r)p$.
By induction, $E[Y_d]$ is the same for all nodes with the same goal distance $d$.
After each unsuccessful $d$-visit, one of the following three cases occurs: 

\begin{itemize}
\item With probability $r$, the walk restarts from $s$ and, after an expected $\sum_{j=d+1}^{D}u_j$ further steps, performs another $d$-visit.
\item With probability $(1-r)i$, the walk transitions to a deadend and, after an expected $k + \sum_{j=d+1}^{D}u_j$ further steps, performs another $d$-visit.
\item With probability $(1-r)c$, the walk stalls at distance $d$.
\end{itemize}
Therefore, 

\begin{eqnarray}
E[Y_d] &=& r\sum_{j=d+1}^{D}u_j + (1-r) \left(1+ c + ik + i\sum_{j=d+1}^{D}u_j \right) \nonumber
\end{eqnarray}

Thus, two states with the same goal distance $d$ have the same $E[Y_d]$. 
For the base case $d=D$, the same three cases apply, with the exception that after restarting, the
random walk is immediately at distance $d$ again:

\begin{eqnarray}
E[Y_D] &=& r + (1-r) \left(1+ c + ik  \right) \nonumber
\end{eqnarray}
\end{proof}


\begin{mythe} \label{thr:IRH}
Let $G=(V,E)$ be an IRH graph, $v \in V$, $p=pc_G(v)$, $c=sc_G(v)$, and $i=irc_G(v)$.
Let $R = RRW(G, s, r)$, $d = d_G(s,v)$, and let $k$ be the expected number of steps that $R$
performs in a deadend before restarting from $s$.
The hitting time $h_{sv} = \Theta\left(\beta\lambda^{d-1}\right)$, where $\beta= \frac{ik+1}{p}$ and $\lambda=\frac{i}{p}+\frac{r}{(1-r)p}+1$.
\end{mythe}

\begin{proof}
Let $d=d_G(s,v)$. According to the Markov property of random walks \cite{Norris},
\begin{eqnarray}
h_0 &=& 0 \nonumber \\
h_x &=& (1-r)\left(i(k + h_d) + ph_{x-1} + ch_x + 1\right) + rh_d \nonumber 
%h_d &=& p_{xv} \nonumber \\
\end{eqnarray}

Let $u_x = h_x - h_{x-1}$. Then, for $2 \leq x \leq d$,
\begin{eqnarray}
u_x &=& (1-r)\left(ph_{x-1} - ph_{x-2} + ch_{x} - ch_{x-1} \right)  \nonumber \\
        &=& (1-r)(pu_{x-1} + cu_{x} )  \nonumber \\
       &=& \frac{(1-r)p}{1-c+cr}u_{x-1}  \nonumber \\
\end{eqnarray}
Since $c=1-p-i$
\begin{eqnarray}
u_x &=& \frac{(1-r)p}{i(1-r)+p(1-r)+r}u_{x-1}  \nonumber \\
       &=& \lambda^{-1}u_{x-1}  \nonumber \\
\end{eqnarray}

For $x<d$,
\begin{eqnarray}
%u_x &=& \prod_{i=x}^{N-1}(1-pr(i, u)) 
u_x &=& \lambda ^{d-x} u_{d}  \nonumber \\
h_x &=&  \sum_{i=1}^{x}u_i \nonumber \\
&=& u_{d}  \sum_{i=1}^{x} \lambda ^{d-i} \nonumber \\
&=& \lambda ^{d-x} (\frac{\lambda^{x}-1}{\lambda-1}) u_{d}  \nonumber 
\end{eqnarray}
The value $u_d$ is the unit progress time at goal distance $d$. Therefore, 
\begin{eqnarray}
u_d &=& (1-r)\left(cu_d + i(k + u_d) + 1\right) + ru_d  \nonumber \\
        &=& \left(r+ c(1-r) + i(1-r)\right) u_d + ik(1-r) + (1-r) \nonumber \\
        &=& \left(r+ (1-r)(1-p)\right) u_d + (ik +1) (1-r) \nonumber 
\end{eqnarray}
Solving for $u_d$, and using $1-r-(1-r)(1-p) = (1-r)p$, yields
\begin{eqnarray}
u_d &=& \frac{(ik+1)(1-r)}{(1-r)p} = \frac{ik+1}{p} = \beta \nonumber        
\end{eqnarray}




Therefore, 
\begin{eqnarray}
h_d &=& u_d + h_{d-1} \nonumber \\
h_d &=& \beta + \beta\lambda (\frac{\lambda^{d-1}-1}{\lambda-1}) \nonumber \\
h_d &\in& \Theta\left(\beta\lambda^{d-1}\right) \label{eq:IRH}
\end{eqnarray}

\end{proof}
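As a numerical sanity check (ours, not from the paper), the closed form for $h_d$ can be verified against the linear system defined by the recurrence above, for illustrative parameter values with $p+c+i=1$:

```python
import numpy as np

# Illustrative parameters: progress p, stall c, infinite regress i,
# restart rate r, expected deadend time k, start distance d.
p, c, i, r, k, d = 0.3, 0.4, 0.3, 0.05, 19.0, 6
lam = i / p + r / ((1 - r) * p) + 1.0
beta = (i * k + 1) / p

# Linear system from h_x = (1-r)(i(k + h_d) + p h_{x-1} + c h_x + 1) + r h_d,
# with h_0 = 0; unknowns h_1 .. h_d.
A = np.zeros((d, d))
b = np.full(d, (1 - r) * (i * k + 1))
for x in range(1, d + 1):
    A[x - 1, x - 1] += 1 - (1 - r) * c
    if x >= 2:
        A[x - 1, x - 2] -= (1 - r) * p
    A[x - 1, d - 1] -= (1 - r) * i + r
h = np.linalg.solve(A, b)

# Closed form of the proof: h_d = beta + beta*lam*(lam^(d-1) - 1)/(lam - 1),
# and h_1 = u_1 = lam^(d-1) * u_d.
closed_form = beta + beta * lam * (lam ** (d - 1) - 1) / (lam - 1)
assert np.isclose(h[-1], closed_form)
assert np.isclose(h[0], beta * lam ** (d - 1))
```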



\begin{mythe}\label{thr:IRH_BOUND}
Let $G=(V,E)$ be a homogeneous graph, $v \in V$, $p=pc_G(v)$ and $q=rc_G(v)$.
Let $R=RRW(G, s, r)$. The hitting time $h_{sv} \in O\left(\beta\lambda^{d-1}\right)$, where 
$\lambda=\left(\frac{q}{p}+ \frac{r}{p(1-r)}+1\right)$, $\beta=\frac{q+r}{pr}$ and $d=d_G(s,v)$.
\end{mythe}

\begin{proof}
For any goal distance $x$, $h_x \leq \frac{1}{r} + h_d$, because the random walk
restarts from $s$ after an expected $\frac{1}{r}$ steps; the right hand side
is the hitting time of a random walk that is stuck in an infinitely large deadend.
Therefore, under the pessimistic assumption that each time the random walk regresses it enters
such a deadend, the theorem for IRH graphs yields an upper bound for a homogeneous graph:
it suffices to replace $i$ with $q$ and $k$ with $\frac{1}{r}$ in Equation \ref{eq:IRH}.

\end{proof}


Comparing the exponential term $(\frac{q}{p})^{D}$ in the hitting time of RW (Equation \ref{eq:homos})
with the term $\lambda^{d-1}$ for RRW (Equation \ref{eq:IRH}),
the base of the exponential term for RRW 
equals the regress factor, the base for RW, plus $\frac{r}{p(1-r)} + 1$, which might
translate to larger hitting times for RRW.
However, by choosing a small enough restart probability $r$, the term $\frac{r}{p(1-r)}$ can be made arbitrarily small.
The main advantage of RRW over simple random walks
is that the exponent of the exponential term is reduced from $D$ to $d(s,v)-1$,
which can make a huge difference, especially when $d(s,v)$ is small. 
Conversely, RRW is predicted to be somewhat weaker when $v$ is ``far away'', so that $d(s,v)$
is close to $D$. 
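This trade-off can be illustrated with concrete numbers (ours, purely illustrative). Comparing the bound $\beta\lambda^{d-1}$ of Theorem \ref{thr:IRH_BOUND} with the $(\frac{q}{p})^{D}$ growth of a simple random walk shows RRW winning when $d(s,v)$ is much smaller than $D$ and losing when $d(s,v)$ approaches $D$:

```python
def rw_growth(p, q, D):
    """Exponential term (q/p)^D of the plain random walk hitting time."""
    return (q / p) ** D

def rrw_bound(p, q, r, d):
    """Upper bound beta * lambda^(d-1) of Theorem thr:IRH_BOUND."""
    lam = q / p + r / (p * (1 - r)) + 1.0
    beta = (q + r) / (p * r)
    return beta * lam ** (d - 1)

p, q, r = 0.2, 0.4, 0.01   # regress factor q/p = 2
D = 30                     # largest goal distance in the graph

assert rrw_bound(p, q, r, d=10) < rw_growth(p, q, D)   # start near goal: RRW wins
assert rrw_bound(p, q, r, d=30) > rw_growth(p, q, D)   # d close to D: RW wins
```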

\subsection{A Grid Example}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth, height=0.26\textheight ]{RESOURCES/grid.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:grid} Average number of generated states as a function of 
  the goal distance of the starting state (x-axis), for different restart rates, in the Grid domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:grid} shows the results of RRW with restart rate $r \in \{ 0, 0.1, 0.01, 0.001 \}$
in a variant of the Grid domain. This domain features an $n\times n$ grid with a robot 
that needs to first pick up a key at location $(n,n)$, then unlock a door at
$(0,0)$. 
To make the problem nontrivial, with regress factors larger than 1,
the robot can only move in at most three directions in most of the grid: left, up, or down. 
Only in the top row is the robot allowed to move right (but not up).

In this domain, all states before the robot picks up the key share the same $h_{FF}$ value. 
Figure \ref{fig:grid} shows the average number of states generated until the subgoal of
picking up the key is reached, with the robot starting from different goal
distances plotted on the x-axis. Since the regress factors
are not uniform in this domain, Theorem \ref{thr:IRH_BOUND} does not apply directly. 
Still, comparing the results of RRW for different $r>0$ with
simple random walks where $r=0$, the experiment confirms the high-level predictions of
Theorem \ref{thr:IRH_BOUND}: 
\begin{itemize}
\item RRW generates slightly more states, at most two times more, 
than simple random walks when the initial goal distance is large, $d \geq 14$, and $r$ is small enough.
\item RRW is much more efficient when $d$ is small; for example it generates three orders of magnitude 
fewer states for $d=2$, $r=0.01$.    
\end{itemize}

\section{Extension to Bounds for Other Graphs}
While many planning problems cannot be exactly modelled as 
FH, FWH, or IRH  graphs, these models
can still be used to obtain upper bounds on the hitting time in any graph $G$. 
Consider a corresponding IRH graph $G'$ 
with progress chances at each goal distance $d$ set to 
the minimum progress over all nodes at goal distance $d$ in $G$.
Then the hitting times for $G'$ are an upper bound for the hitting times in $G$,
since in $G'$ progressing towards the goal is at most as probable as in $G$.

\begin{mythe} \label{thr:bound}
Let $G=(V,E)$ be a directed graph, $s, v \in V$, and $D=d_G(v)$. Let $p_{min}(d)$ be the minimum
progress chance over all nodes at distance $d$ from $v$. Let $G'=(V', E')$ be an IRH graph such that $pc_{G'}(v',d)=p_{min}(d)$ 
for $v' \in V'$ and $0 \leq d \leq D$. Given $s' \in V'$ with $d_{G'}(s', v') = d_G(s, v)$, let $R=RRW(G, s, r)$ and $R'=RRW(G', s', r)$.
Then the hitting time of $R$ is upper bounded by that of $R'$: $h_{sv} \leq h_{s'v'}$.
\end{mythe}

\begin{proof}

Let $S= X_D, X_{D-1}, \dots, X_1$ be a sequence of random variables, where $X_n$, $1 \leq n \leq D$, denotes the first
vertex reached by $R$ at goal distance $n$. Since 
\begin{eqnarray}
P(X_n = i_n | X_{n+1} = i_{n+1}, \dots, X_{D} = i_{D}) = P(X_n = i_n | X_{n+1} = i_{n+1}) \nonumber 
\end{eqnarray}
$S$ is a Markov process. Let $R(S)$ be the set of all possible sequences. The expected length of the random walk given a sequence $I \in R(S)$ is $H_I =  \sum_{n=1}^D u(i_n)$, where $i_n$ is the value of $X_n$ in $I$ and $u(i_n)$ is the expected unit progress time for the vertex $i_n$. Furthermore, $h_{sv} = \sum_{I \in R(S)} P(I) H_I$. 

Let $V_n = \{x \mid x \in V \wedge d_G(x,v)=n \}$, and 
assume for all $x_n \in V_n$ that $u(x_n) \leq u'_n$, where $u'_n$ is the unit progress time at distance $n$ from $v'$. Then 

\begin{eqnarray}
H_I &\leq&  \sum_{n=1}^D u'_n \nonumber \\
h_{sv} &\leq& \sum_{I \in R(S)} P(I) \sum_{n=1}^D u'_n 
	   = \sum_{n=1}^D u'_n \sum_{I \in R(S)} P(I) 
	   = \sum_{n=1}^D u'_n \label{eq:h_inequality}
\end{eqnarray}

and since $h_{s'v'}=\sum_{n=1}^D u'_n$ (Lemma \ref{lem:IRH}), $h_{sv} \leq h_{s'v'}$. Therefore, 
to prove the theorem it is enough to prove the assumption above. Lemma \ref{lem:IRH} shows
\begin{eqnarray}
u'_n &=& E[Y_n']  E[X_n'] + 1 \label{eq:bound1} \\
E[X_n'] &=& \frac{1-(1-r)p_{min}(n)}{(1-r)p_{min}(n)}  \nonumber \\
E[Y_n'] &=& r\sum_{j=n+1}^{D}u'_j + (1-r) \left(1+ c + ik + i\sum_{j=n+1}^{D}u'_j \right) \nonumber 
\end{eqnarray}

Analogous to Equation \ref{eq:bound1}, for all $x_n \in V_n$, $u(x_n) = E[Y_n] E[X_n] + 1$, where $X_n$ and $Y_n$ are random variables
that respectively measure the number of unsuccessful $n$-visits and the number of 
steps between two consecutive visits to distance $n$ in $G$. Therefore, to show $u(x_n) \leq u'_n$ it is enough
to show 
\begin{eqnarray}
E[X_n] &\leq& E[X_n'] \label{eq:u_inequality1}\\
E[Y_n] &\leq& E[Y_n'] \label{eq:u_inequality2}
\end{eqnarray}

Let $P_{i,n}$ be a random variable denoting the progress chance at the $i$th $n$-visit.
In the worst case, where $P_{i,n} = p_{min}(n)$ for all $i$, the expected number of unsuccessful $n$-visits 
equals $E[X_n']$ (a geometric distribution with
success probability $(1-r)p_{min}(n)$). Since increasing the progress chances
can only decrease the expected number of unsuccessful $n$-visits, $E[X_n] \leq E[X_n']$. Therefore, 
Inequality \ref{eq:u_inequality1} holds. Inequality \ref{eq:u_inequality2} can be proved by induction. 
For $n = D$,
\begin{eqnarray}
E[Y_D'] &=& r+ (1-r) \left(1+ c + ik \right) \nonumber \\
 	    &=& E[Y_D] \nonumber
\end{eqnarray}

Suppose the inequality holds for $D, D-1, \dots, n+1$. Let $\omega_{n}$ be the expected number of steps of an RRW
that starts from $s$ and ends at the first vertex reached at goal distance $n$. Then
\begin{eqnarray}
E[Y_n] &=& r\omega_n + (1-r) \left(1+ c + ik + i\omega_n \right) \nonumber
\end{eqnarray}

For $n < d \leq D$, both Inequalities \ref{eq:u_inequality1} and \ref{eq:u_inequality2} (the induction assumption)
hold. Therefore, for any $n < d \leq D$ and $x \in V_d$, $u(x) \leq u'_d$ and, analogous to Inequality \ref{eq:h_inequality}, 
$\omega_n \leq \sum_{j=n+1}^D u'_j$. Therefore, 

\begin{eqnarray}
E[Y_n] &\leq& r\sum_{j=n+1}^{D}u'_j + (1-r) \left(1+ c + ik + i\sum_{j=n+1}^{D}u'_j \right) \nonumber \\
        &\leq& E[Y_n'] \nonumber
\end{eqnarray}
\end{proof}

\section{Related Work}

Random walks have been extensively studied in many different scientific fields such as 
physics, finance, and computer networking \cite{rw_network,rw_finance,rw_supply}.
Discrete and continuous random walks are well studied
\cite{Norris,Aldous,Yin,pardoux}. 
The standard approach to finding the hitting time in a graph is to write the linear equations
for the hitting times, as in Equations \ref{eq:example1} and \ref{eq:example2}, and solve 
them by linear algebra. In contrast, the techniques used 
in this paper mainly build on methods for finding the hitting time of 
simple chains, such as birth--death and gambler's ruin chains \cite{Norris}. The advantage of
these methods is that solutions can be expressed easily as functions
of chain features. 
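As a concrete instance of this standard approach (our illustration, not from the paper), the hitting times towards a goal vertex $v$ satisfy $h_{vv}=0$ and $h_{uv} = 1 + \sum_{w} p_{uw} h_{wv}$, i.e., $(I-Q)h = \mathbf{1}$, where $Q$ is the transition matrix with the row and column of $v$ removed:

```python
import numpy as np

# Random walk on a 4-cycle 0-1-2-3; hitting times to v = 0.
P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
v = 0
keep = [u for u in range(4) if u != v]
Q = P[np.ix_(keep, keep)]                  # drop row/column of the goal state
h = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))

# Known closed form for a cycle of length n: h_{xv} = x (n - x).
assert np.allclose(h, [3.0, 4.0, 3.0])
```

The linear-algebra route gives exact numbers for any fixed graph, but unlike the chain-based formulas above it does not expose how the hitting time scales with structural parameters such as the regress factor.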

Studying the properties of random walks on finite graphs has a very long history,
surveyed in \cite{lovasz}. 
One of the most relevant results is that the hitting time of a random walk in an 
undirected graph with $n$
nodes is $O(n^3)$ \cite{Brightwell}. 
However, this result does not explain the strong performance of random walks 
in planning search spaces, which grow exponentially with the number of objects. 
Despite the rich existing literature on random walks, its application to 
the analysis of random walk planning appears to be novel. 

\section{Conclusion}



\vskip 0.2in
\bibliography{thesis}
\bibliographystyle{theapa}

\end{document}






