%\documentclass[twoside,11pt]{article}
%\usepackage{jair, theapa, rawfonts, amsthm, amsmath, mathtools}
\documentclass{article}
\usepackage{aaai}
\usepackage{times}
\usepackage{amsthm} 
\usepackage{amsmath} 
\usepackage{mathtools}
\usepackage{graphicx}

\renewcommand{\qedsymbol}{}
\newtheorem{mydef}{Definition}
\newtheorem{mythe}{Theorem}
\newtheorem{mylem}{Lemma}

\usepackage[usenames]{color} % Only used in comment commands
\definecolor{Blue}{rgb}{0,0.16,0.90}
\definecolor{Red}{rgb}{0.90,0.16,0}
\definecolor{DarkBlue}{rgb}{0,0.08,0.45}
\definecolor{ChangedColor}{rgb}{0.9,0.08,0}
\definecolor{CommentColor}{rgb}{0.2,0.8,0.2}
\definecolor{ToDoColor}{rgb}{0.1,0.2,1}

% *** Use this definition of the command to show the comments ***
\newcommand{\changed}[0]{\textbf{\color{ChangedColor} Changed: }}
\newcommand{\hootan}[1]{\textbf{\color{CommentColor} /* #1  (hootan)*/}}
\newcommand{\martin}[1]{\textbf{\color{CommentColor} /* #1  (martin)*/}}
\newcommand{\commentout}[1]{}

\begin{document}

\title{A Theoretical Model for Studying Random Walk Planning}
\author{Submission number: 1007}
%\author{Hootan Nakhost\\
%University of Alberta\\
%Edmonton, Canada\\
%nakhost@ualberta.ca
%\And 
%Martin M\"uller\\
%University of Alberta\\
%Edmonton, Canada\\
%mmueller@ualberta.ca
%}

\maketitle

\begin{abstract}
Random walks are a relatively new component used in several state-of-the-art satisficing planners.
Empirical results have been mixed: while the approach clearly outperforms more systematic search methods
such as weighted A* on many planning domains, it fails in many others. So far, the explanations for these
empirical results have been somewhat ad hoc.
This paper proposes a formal framework for comparing the performance of random walk and systematic search methods.
Homogeneous and weakly homogeneous graphs are
proposed as graph classes that 
represent characteristics of the state space of prototypical planning domains, 
while still allowing a theoretical analysis of the performance of both random walk 
and systematic search algorithms.
This gives well-founded insights into the relative strengths and weaknesses of the approaches.
The close relation of the models to some well-known planning domains is shown.

One main result is that in contrast to systematic search, where the branching factor plays a decisive role,
the performance of random walk methods is determined to a large degree by the Regress Factor, 
the ratio between the probabilities of regressing away from and progressing towards the goal. 
By considering both branching and regress factors of a state space, 
it is possible to explain the relative performance 
of random walk and systematic search methods. 
\end{abstract}

\section{Introduction}
Random walks, which are paths through a search space that follow
successive randomized state transitions, 
are a main building block of prominent 
search algorithms such as Stochastic Local Search techniques 
for SAT \cite{selman:etal:aaai-92,wei:etal:jsat-08} and 
Monte Carlo Tree Search in game playing and puzzle solving
\cite{dave,Finnsson,DBLP:conf/ijcai/Cazenave09}. 

Inspired by these methods, several recent satisficing planners also
utilize random walk techniques. Identidem \cite{identidem} performs 
a hill climbing search that uses random walks to escape from plateaus or saddle points.
All visited states are evaluated using a heuristic function. The random walks are biased 
towards the states with lower heuristic values. Arvand \cite{Arvand} 
takes a more radical approach:
it relies exclusively on a set of
random walks to determine the next state in its local search.
For efficiency, it only evaluates  the endpoints of those random walks. 
Arvand also learns to bias its random walks towards more promising search regions over time. 
Roamer \cite{roamer} enhances its best-first search (BFS) with random walks,
aiming to escape from \textit{search plateaus} where the heuristic is uninformative. 

While the success of random walk methods in other research areas serves as
a good general motivation, such work did not provide
an explanation for why these planners perform well.  
Three points have been noted as
main advantages of random walks for planning:
\begin{itemize}
\item Random walks are more effective than systematic search
approaches for escaping from regions where
heuristics provide no guidance \cite{identidem,Arvand,roamer}.
\item Increased sampling of the search space by random walks adds a beneficial
\textit{exploration} component to balance the \textit{exploitation} of the heuristic in planners \cite{Arvand}.  
\item  Combined with proper \textit{restarting} mechanisms,
random walks can avoid most of the time
wasted by systematic search in dead ends. Through restarts, random walks can rapidly back out of 
unpromising search regions \cite{identidem}. 
\end{itemize}

While these explanations are intuitively appealing, there is little direct
empirical or theoretical evidence supporting them. 
Typically, random walk planners are evaluated by measuring their coverage, 
runtime, or plan quality. While such results demonstrate that random walks can perform well
in practice, they provide no detailed insights into
\textit{why} they work. 
For example, there have been no measurements which directly show that random walks really do
escape more quickly from plateaus than other, more systematic approaches. 

\subsection{A First Motivating Example}
The main goal of the current paper is a careful theoretical investigation of the first 
point above - the question of how different search algorithms used in planning are able to 
escape from plateaus. As an example, consider the following well-known plateau for
the FF heuristic, $h_{FF}$, discussed in \cite{Helmert04}. 
Recall that $h_{FF}$ estimates the goal distance by
solving a relaxed planning problem in which all the negative effects of actions are ignored. 
Consider a transportation domain in which trucks are used to move packages between $n$ locations
connected in a single chain $c_1,\cdots,c_n$.
The goal is to move one package from $c_n$ to $c_1$.
%\hootan{Do we need a picture?}
%\hootan{Do we need to explain why this is a plateau?}
Figure \ref{fig:transport} shows the results of a basic scaling experiment on this domain with $n=10$ locations,
varying the number of trucks $T$ from 1 to 20. All trucks start at $c_2$. 
The results compare basic
Monte Carlo Random Walks (MRW) from Arvand-2011 and basic Greedy Best First Search (GBFS) from LAMA-2011. 
Figure \ref{fig:transport} shows how the runtime of GBFS grows quickly 
with the number of trucks $T$
until it exceeds the memory limit of 64 GB. 
This is expected since the effective branching factor grows with $T$. However,
the increasing branching factor has only little effect on MRW: the runtime grows only linearly in $T$. 

\subsection{Choice of Basic Search Algorithms - Why No Enhancements?}

All the examples in this paper use state-of-the-art implementations of
basic, unenhanced search methods.
GBFS implemented in LAMA-2011 is used as a representative
of the systematic search methods, while the MRW implementation of Arvand-2011 
represents random walk methods.
Both programs use $h_{FF}$ for their evaluation.
Enhancements such as preferred operators in LAMA and Arvand, multi-heuristic search in LAMA,
or Monte Carlo Helpful Actions (MHA) in Arvand are switched off. 

The reasons are:
\begin{enumerate}
\item This paper studies theoretical models that can explain the substantially different behavior of random walk and
systematic search methods. Simple search methods make it possible to align the theoretical results closely with
practical experiments.
\item Enhancements may benefit both methods in different ways, or be applicable to only one method, and so may
confuse the picture. Studying theoretical models that can handle such enhancements remains future work.
\item The focus of this paper is to understand the behavior of these two search paradigms in regions
where there is a lack of guiding information, such as plateaus. Therefore, some examples even use
a blind heuristic. While enhancements can certainly have a great influence on search parameters
such as branching and regress factors or search depth, the authors believe that the fundamental differences
in search behavior will remain.
\end{enumerate}
%This type of study in no way limits the applicability of the results
%because no matter how good the enhancements and the heuristic functions are designed
%there will be still search regions where none of these can provide any guidance and the power of the
%search to find a way out is the thing that matters.  

\subsection{Homogeneous and Weakly Homogeneous Graphs}
Two classes of graphs which model the search space of planning problems are proposed, in order to study
the behavior of search algorithms: 
\textit{homogeneous} and \textit{weakly homogeneous} graphs. 
The key property used to analyze random walks on these graphs is their \textit{regress factor} $\mathit{rf}$: 
the ratio of the probability of the random walk \textit{regressing} away from 
a goal to the probability of \textit{progressing} towards 
a goal. In homogeneous graphs, almost all nodes share the same $\mathit{rf}$. 
Theorem \ref{thr:homo} shows that $\mathit{rf}$ plays almost the same
role as the branching factor $b$ in systematic search: runtime grows exponentially with a base of $\mathit{rf}$ 
as long as 
$\mathit{rf} > 1$. In practice, large parts of the state 
space of tasks in Transport and Grid are close to homogeneous graphs.

In the \textit{weakly homogeneous graph} model, $\mathit{rf}$ is no longer constant over the whole graph, 
but it depends only on the distance to a goal. Theorem  \ref{thr:weak} extends the analysis to this graph class.
%\martin{You said THE goal, I say A goal. Does it matter? Is the definition general for multiple goal states? If not need to
%put in the restriction somewhere.}
%\martin{what are the results for this case?}
%Theorem \ref{thr:weak} shows that the hitting time in this graph is
%a function of the multiplication of all $\mathit{rf}$, defined for each goal distance in the graph.
\commentout{
For both models examples that relate the models to standard
planning benchmarks are given, and possible
ways to improve the basic random walks  are discussed.} 
The state space of Gripper is close to a weakly homogeneous graph.

\subsection{Restarting Random Walks (RRW)}

Besides $\mathit{rf}$, the other key variable affecting the average runtime of basic random walks
is the largest goal distance $D$ in the whole graph, which appears in the exponent. 
For large $D$, the \textit{restarting random walks} (RRW) model
can offer a substantial performance advantage. At each search step,
an RRW restarts from a fixed initial state $s$ with probability $r$. 
Theorem \ref{thr:restarting} proves that the expected runtime of RRW
depends only on the goal distance of $s$, not on $D$. 

\begin{figure}
%\centering
\includegraphics[width=0.47\textwidth, height=0.23\textheight ]{RESOURCES/transport.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:transport} Average runtime of GBFS and MRW, varying the number of trucks (x-axis), in the Transport domain. Missing data points mean the planner exceeded the memory limit.}
\vspace{-0.2cm}
\end{figure}


\section{Background and Notation}
Notation follows standard references such as \cite{Norris}.
In all definitions, let $G=(V,E)$ be a directed graph.
\begin{mydef}[Distance $d_G$]
For $u,v \in V$, $d_G(u,v)$ is the length of a shortest path from $u$ to $v$. 
The distance $d_G(v)$ of a \textit{single} vertex $v$ in $G$ is the length
of a longest shortest path from a node in $G$ to $v$: $d_G(v)=\max_{x \in V} d_G(x, v)$.
\end{mydef}
\begin{mydef}[Neighborhood]
The \textit{neighborhood} of $u \in V$ is the set of all vertices
in distance 1 of $u$:  $N_G(u)=\{v | v \in V \wedge d_G(u,v) = 1\}$.
\end{mydef}
%\martin{why neighborhood, not successors? would neighborhood not usually include (v,u) as well?}
\begin{mydef}[Random Walk]
A random walk on $G$ is a Markov chain $M_G$ with states $V$
and transition probability between $u, v \in V$
of $p_{uv} = \frac{1}{|N_G(u)|}$ if $(u,v) \in E$,
and $p_{uv} = 0$ if $(u,v) \notin E$.
\end{mydef}

\begin{mydef}[Hitting Time]
The hitting time $h_{uv}$ is the expected number of steps in a random walk that starts from $u$ 
and reaches $v$ for the first time.
\end{mydef}

\begin{mydef}[Regress Factor]
Let $u,v \in V$, and $X: V \rightarrow V$ a random variable
that given a vertex $k$ selects a vertex from $N_G(k)$ uniformly at random.

\noindent The progress chance of $u$ regarding $v$, $pc(u,v)$, is 
the probability of getting closer to $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v)-1)$. 

\noindent The regress chance of $u$ regarding $v$, $rc(u,v)$, is 
the probability of getting further away from $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v)+1)$. 

\noindent The stalling chance of $u$ regarding $v$, $sc(u,v)$, is 
the probability of staying at the same distance of $v$ after one random step at $u$: $pr(d_G(X_u, v) = d_G(u, v))$. 

\noindent The regress factor of $u$ regarding $v$ is $\textit{rf}(u,v)=\frac{rc(u,v)}{pc(u,v)}$.
\end{mydef}
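As an illustration, the chances in this definition can be computed exhaustively on any explicit graph. The following Python sketch is not part of the paper; the toy graph and all function names are ours. It derives $pc$, $rc$, $sc$ and $\mathit{rf}$ from BFS distances to $v$:

```python
# Sketch: compute pc, rc, sc and rf of Definition "Regress Factor"
# by enumeration on a small explicit directed graph (names are ours).
from collections import deque

def distances_to(adj, v):
    """BFS over reversed edges gives d_G(u, v) for every u."""
    rev = {u: [] for u in adj}
    for u, succs in adj.items():
        for w in succs:
            rev[w].append(u)
    dist = {v: 0}
    q = deque([v])
    while q:
        x = q.popleft()
        for y in rev[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def chances(adj, u, v):
    """(pc, rc, sc, rf) of u regarding v for one uniform random step."""
    dist = distances_to(adj, v)
    nbrs = adj[u]
    pc = sum(dist[w] == dist[u] - 1 for w in nbrs) / len(nbrs)
    rc = sum(dist[w] == dist[u] + 1 for w in nbrs) / len(nbrs)
    sc = sum(dist[w] == dist[u] for w in nbrs) / len(nbrs)
    return pc, rc, sc, rc / pc

# Bidirectional path 0-1-2-3-4; from the middle, pc = rc = 1/2, so rf = 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(chances(path, 2, 0))  # (0.5, 0.5, 0.0, 1.0)
```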

\subsection{Plateaus, Exit Points and Exit Time}
A \textit{plateau} $P \subseteq V$ is a connected subset of states which have the same heuristic value $h_P$.
A state $s$ is an \textit{exit point} of $P$ if $s \in N_G(p)$ for some $p \in P$,
with heuristic value $h(s) < h_P$. The \textit{exit time} of a 
random walk on a plateau $P$ is the expected number
of steps in the random walk until it reaches an exit point for the first time. 
\commentout{
In the corresponding graph $G(V, E)$ of a plateau $P$, there is a one to one relation between $V$ and the states
on the plateau, and for $u,v \in V$, $(u,v) \in E$ iff there is an action from the state represented by $u$
to the state represented by $v$. Now if the exit point(s) 
in a plateau is defined as the goal(s) of a RW in the corresponding
graph, then the hitting time in the graph equals the exit time in the plateau.}

\section{Fair Homogeneous Graphs}
A fair homogeneous graph $G$ is the simplest state space model introduced here. 
\textit{Homogeneity} means that both progress and regress
chances are constant for all nodes in $G$. \textit{Fairness} means that 
an action can change the goal distance by at most one.

\begin{mydef}[Homogeneous Graph]

Given $v \in V$,
$G$ is $v$-homogeneous iff there exist two real functions $pc_G(x)$ and $rc_G(x)$ 
with domain $V$ and range $[0, 1]$ such that for any vertex $u \in V$ the following
two conditions hold:
\begin{enumerate}
%\item $sc(u,v) = sc_G(v)$.
\item If $u \neq v$ then $pc(u,v) = pc_G(v)$.
\item If $d_G(u,v) < d_G(v)$ then $rc(u,v) = rc_G(v)$. 
\end{enumerate}
G is homogeneous iff it is $v$-homogeneous for all $v \in V$.  
The functions $pc_G(x)$ and $rc_G(x)$ are respectively called
the progress and the regress chance of $G$ regarding $x$. 
The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x)=rc_G(x)/pc_G(x)$.
\end{mydef}

\begin{mydef}[Fair Graph]
$G$ is fair for $v \in V$ iff for all $u \in V$, $pc(u,v)+rc(u,v)+sc(u,v) = 1$. $G$
is fair if it is fair for all $v \in V$. 
\end{mydef}

\begin{mythe} \label{thr:homo}
Let $G=(V,E)$ be fair and homogeneous. For $u,v \in V$, let 
$p=pc_G(v)$, $q=rc_G(v)$, $D = d_G(v)$, and $x=d_G(u,v)$.
Then the hitting time $h_{uv}$ is: 
\[
h_{uv}=
  \begin{dcases}
   \frac{q}{(p-q)^2}\left((\frac{q}{p})^{D}-(\frac{q}{p})^{D - x}\right) + \frac{x}{p-q} &\text{if }  q \neq p \\
   -\frac{x^2}{2p}+(\frac{2D+1}{2p})x &  \text{if } q =p
  \end{dcases}
\]

\end{mythe}
\begin{proof}
Omitted. This is a special case of Theorem \ref{thr:weak}.
\end{proof}

Note that $h_{uv}$ increases monotonically with $q$ and decreases with $p$.
%\martin{is it true? even near p=q?}
From Theorem \ref{thr:homo} it follows that:

\begin{align}
\label{eq:homos}
h_{uv} \in
  \begin{dcases}
  \Theta\left(\frac{q}{(p-q)^2}(\frac{q}{p})^D\right) &\text{if }  q > p \\
  \Theta(x) &  \text{if } q <p  \\
  \Theta(x \times D) & \text{if } q =p 
  \end{dcases}
\end{align}

The hitting time depends on the regress factor $\textit{rf}=q/p$; as long as
 $q$ and $p$ are fixed, changing other 
 structural parameters such as the branching factor $b$ has no 
 effect on the hitting time. 
For $q > p$, it does not matter how close the start state is to the goal: the hitting time
depends on $D$, the largest goal distance in the graph. 
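The closed form of Theorem \ref{thr:homo} can be cross-checked against the one-step recurrence behind it, $u_D = 1/p$ and $u_d = (q/p)u_{d+1} + 1/p$ with $h_{uv} = \sum_{i=1}^{x} u_i$. The Python sketch below is ours and the parameter values are purely illustrative:

```python
# Sketch: Theorem 1 closed-form hitting time vs. direct evaluation of the
# recurrence u_D = 1/p, u_d = (q/p) u_{d+1} + 1/p, h = u_1 + ... + u_x.

def hitting_time_closed(p, q, D, x):
    """Closed-form h_uv from Theorem 1 for a fair homogeneous graph."""
    if p != q:
        rf = q / p
        return q / (p - q) ** 2 * (rf ** D - rf ** (D - x)) + x / (p - q)
    return -x * x / (2 * p) + (2 * D + 1) / (2 * p) * x

def hitting_time_recurrence(p, q, D, x):
    """Sum the expected steps per unit of progress, from distance x down to 1."""
    us = [1.0 / p]                      # u_D
    for _ in range(D - 1):              # u_{D-1}, ..., u_1
        us.append((q / p) * us[-1] + 1.0 / p)
    return sum(us[-x:]) if x else 0.0   # us[-1] is u_1, us[-x] is u_x

# Illustrative parameters: q < p, q > p and q = p, with D = 12, x = 5.
for p, q in [(0.4, 0.2), (0.2, 0.4), (0.3, 0.3)]:
    a = hitting_time_closed(p, q, 12, 5)
    b = hitting_time_recurrence(p, q, 12, 5)
    assert abs(a - b) < 1e-6 * max(1.0, abs(a))
print("closed form and recurrence agree")
```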

\subsection{Analysis of the Transport Example}
Theorem \ref{thr:homo} helps explain the experimental results in Figure \ref{fig:transport}.  
In this example
the plateau consists of all the states encountered before loading the package
onto one of the trucks. Once the package is loaded, $h_{FF}$ can guide the search
directly towards the goal. Therefore, the exit points of the plateau are the states in which 
the package is loaded onto a truck. 
%\martin{Why are truck locations not part of hFF???}
%\hootan{I don't get this quesiton..}
Let $c_m$, $m<n$, be the location of a most advanced truck in the chain.
For all non-exit states of the search space, $q \leq p$ holds:
there is always at least one action which progresses towards a closest exit point,
namely moving a truck from $c_m$ to $c_{m+1}$.
At most one action regresses: if $m>1$ and only
a single truck is at $c_m$, that truck can move to $c_{m-1}$.
Setting $q = p$ for all states yields an upper
bound on the hitting time, since increasing the ratio can only increase the hitting time. 
See \cite{TR} for details.
By Theorem \ref{thr:homo}, $ -\frac{x^2}{2p}+(\frac{2D+1}{2p})x$ is an upper bound
for the hitting time. 
%Now if we increase the number of trucks both $q$ and $p$ values will decrease but the regress factors
%do not change. 
If the number of trucks is multiplied by a factor $M$,
 then $p$ will be divided by at most $M$, therefore the upper bound is
also multiplied by $M$. 
The worst case runtime bound grows only linearly with the number of trucks. In contrast, 
systematic search methods suffer greatly from increasing the number of vehicles,
since this increases
the effective branching factor $b$, and the runtime of methods such as
greedy best first search, A* and IDA*, typically grows as $b^d$. 
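The linear scaling argument can be made concrete: the $q=p$ upper bound of Theorem \ref{thr:homo} is proportional to $1/p$, so dividing $p$ by $M$ multiplies the bound by exactly $M$. A small Python sketch (ours; the concrete values of $p$, $D$ and $x$ are only illustrative):

```python
# Sketch: the q = p bound of Theorem 1 grows exactly linearly when p is
# divided by M (i.e., M times more trucks share the progressing actions).

def exit_time_bound(p, D, x):
    """Upper bound -x^2/(2p) + (2D+1)x/(2p) on the plateau exit time."""
    return -x * x / (2 * p) + (2 * D + 1) * x / (2 * p)

base = exit_time_bound(0.25, 9, 9)        # illustrative p, D, x
for M in (2, 4, 8):
    scaled = exit_time_bound(0.25 / M, 9, 9)
    assert abs(scaled - M * base) < 1e-9 * scaled
print("bound grows linearly in M")
```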

This effect can be observed in all planning problems where increasing the number of 
 objects of a specific type
does not change the regress factor. Examples are the vehicles
 in transportation domains such as Rovers, Logistics, Transport, and Zeno Travel, 
which do not appear in the goal propositions.
%or agents who have similar functionalities and do not appear in the goal, e.g., satellites in the satellite domain. 
Another example is ``decoy'' objects which cannot be used to reach the goal. Actions that affect only
 the state of such
objects do not change the goal distance, so increasing the number of such objects has no effect on $\textit{rf}$
 but can blow up $b$. Of course, techniques such as  
plan space planning, backward chaining planning, or preferred operators can often prune such actions.
%\martin{ mention helpful actions? Relaxed planning graph? FF heuristic?} 
%\hootan{usually actions that change the state of these objects do not change the heuristic values.}

Theorem \ref{thr:homo} suggests that if $q > p$ and the current state is close to 
an exit point in the plateau, then using
systematic search is more effective. In this case random 
walks are extremely inefficient since they move away from the exit with high probability. 
This problem can be fixed to some degree by using restarting random walks.

\section{Fair Weakly Homogeneous Graphs}
%\begin{itemize}
%\item Prove the theorem \ref{thr:weak}.
%\item Use Theorem \ref{thr:weak} to compute the hitting time for a gripper example.
%\item Give experimental results for random walks and Breadth first search in the gripper example.
%\end{itemize}
\textit{Fair weakly homogeneous graphs} generalize fair homogeneous graphs by having 
$pc$ and $rc$ depend on the goal distance instead of being constant. 

\begin{mydef}[Weakly Homogeneous Graph]
For $v \in V$, $G$ is weakly $v$-homogeneous iff 
there exist two real functions 
$pc_G(x, d)$ and $rc_G(x, d)$,
mapping the domain $V\times \{0, 1, \dots, d_G(v)\}$ to the range $[0, 1]$,
such that for any two vertices $u,x \in V$ 
with $d_G(u,v)=d_G(x,v)$ the following two conditions hold:
\begin{enumerate}
\item If $d_G(u,v) \neq 0$, then\\ $pc(u,v) = pc(x,v) = pc_G(v, d_G(u,v))$.
\item $rc(u,v) = rc(x,v) = rc_G(v, d_G(u,v))$.
\end{enumerate}
G is weakly homogeneous iff it is weakly $v$-homogeneous for all $v \in V$.  
$pc_G(x, d)$ and $rc_G(x,d)$ are called
 progress chance and regress chance of $G$ regarding $x$. 
 The regress factor of $G$ regarding $x$ is defined by $\textit{rf}_G(x,d)=rc_G(x,d)/pc_G(x,d)$.
\end{mydef}

The next lemma shows that nodes with the same
goal distance share the same hitting time. 

\begin{mylem} \label{lem:distance}
Let $G=(V,E)$ be fair weakly homogeneous.
Then, for all $v,x,x' \in V$  with $d_G(x,v)=d_G(x',v)=d$, $h_{xv}=h_{x'v}$.
\end{mylem}

\begin{proof}
Let $D=d_G(v)$, $p_i = pc_G(v,i)$ and $q_i=rc_G(v,i)$.
For any $v' \in V$, let $u_{v'v}$ be the expected number of steps in a random
walk that starts from $v'$ until it progresses (by one) towards $v$ for the first time. 
By induction on $d$, we show that $u_{xv}=u_{x'v}=u_d$, where 
\begin{align}
\label{eq:unit}
u_d=
  \begin{dcases}
  \frac{q_d}{p_d}u_{d+1}+\frac{1}{p_d} &\text{if }  d < D  \\
  \frac{1}{p_d} &  \text{if } d = D \\
  \end{dcases}
\end{align}

For $d=D$, before reaching the goal distance $D-1$ 
the random walk is always at a state with goal distance $D$.
The probability of progressing towards
$v$ is $p_D$, so on average $\frac{1}{p_D}$ steps are needed
to progress, and $u_{xv}=u_{x'v}= \frac{1}{p_D}=u_d$.

Suppose the lemma holds for $d+1$. To show that it also holds for $d$,
call the last visit to distance $d$, before progressing to $d-1$, a \textit{successful $d$-visit}, and all previous
visits, after which the walk does not immediately reach distance $d-1$, \textit{unsuccessful $d$-visits}.
Let $X$, $Y$, and $Z$ be three random variables that respectively 
measure the number of unsuccessful $d$-visits before the walk reaches goal distance $d-1$,
the number of steps from an unsuccessful $d$-visit until the next visit to distance $d$, 
and the total length of the random walk.

After each unsuccessful $d$-visit, the random walk
transitions to distance $d+1$ with probability $\frac{q_d}{1-p_d}$ and stalls at distance $d$
with probability $\frac{1-q_d-p_d}{1-p_d}$.
In the former case, by the induction hypothesis, the random walk performs an expected $u_{d+1}$ further steps 
until it returns to distance $d$. Therefore, 
$E[Y] =(\frac{q_d}{1-p_d}) (u_{d+1} + 1)+ (\frac{1-q_d-p_d}{1-p_d}) \cdot 1$. 
Each visit to distance $d$ is successful with probability $p_d$, so $E[X]=\frac{1-p_d}{p_d}$.
Since the durations of the unsuccessful visits are independent and identically distributed,
$E[Z]=E[X]\times E[Y] + 1$, where the final term counts the successful step itself, so:

\begin{eqnarray}
u_{xv} &=& E[Z] = 1+\left(\frac{q_d(u_{d+1} + 1)}{1-p_d} + \frac{1-q_d-p_d}{1-p_d} \right) \left(\frac{1-p_d}{p_d}\right)      \nonumber \\
&=& \frac{q_du_{d+1}}{p_d} +\frac{1}{p_d} = u_d   \nonumber 
\end{eqnarray}

The expected hitting time $h_{xv}$ is the sum of the expected number
of steps in each unit progression from $x$ towards $v$: 
\begin{eqnarray} 
h_{xv}= \sum_{1 \leq i \leq d} u_i = h_{x'v} \label{eq:chain}
\end{eqnarray}

\end{proof}


\begin{mythe} \label{thr:weak}
Let $G=(V,E)$ be a fair weakly homogeneous graph, and $v \in V$.
Let $p_i = pc_G(v,i)$, $q_i=rc_G(v,i)$ and $d_G(v) = D$.
Then for all $x \in V$, 
%\begin{equation} 
%\begin{split}
%h_{xv} = &  \sum_{1 \leq d \leq d_G(x,v)} \left(\left(\prod^{d_G(v)-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_{d_G(v)}} +  \right. \\  
%& \left. \sum^{d_G(v)-1}_{j=d}\left(\frac{1}{p_j}\prod^d_{i=j+1}\frac{q_i}{p_i}\right)\right) \nonumber
%\end{split}
%\end{equation}
\begin{eqnarray} 
h_{xv} = \sum_{1 \leq d \leq d_G(x,v)} \left(\left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right)\right) \nonumber
\end{eqnarray}
\end{mythe}

\begin{proof}
By Lemma \ref{lem:distance} and Equations \ref{eq:unit} and \ref{eq:chain}, with $d = d_G(x,v)$: 
\begin{eqnarray} 
h_{xv} &=& \sum_{1 \leq i \leq d} u_i  \nonumber \\
u_d &=& \frac{q_d}{p_d}u_{d+1}+\frac{1}{p_d} \quad (d < D) \nonumber \\ 
u_D &=& \frac{1}{p_D}  \nonumber 
\end{eqnarray}
It suffices to show that for $d < D$
\begin{eqnarray} 
u_d=\left(\prod^{D-1}_{i=d} \frac{q_i}{p_i}\right)\frac{1}{p_D} + \sum^{D-1}_{j=d}\left(\frac{1}{p_j}\prod^{j-1}_{i=d}\frac{q_i}{p_i}\right) \label{eq:distance} \label{eq:weak}
\end{eqnarray}
The (easy) proof is by downward induction on $d$, with base case $d=D-1$. 
\end{proof}

Theorem \ref{thr:homo} is the special case of Theorem \ref{thr:weak} where 
$q_i = q$ and $p_i = p$. 
In analogy with homogeneous graphs, 
the largest goal distance $D$ and the regress factors $q_i/p_i$ are the main determining factors
for the expected runtime of random walks in weakly homogeneous graphs. 
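The nested-sum formula of Theorem \ref{thr:weak} can be validated mechanically against the recurrence $u_D = 1/p_D$, $u_d = (q_d/p_d)u_{d+1} + 1/p_d$ from the proof of Lemma \ref{lem:distance}. The sketch below is ours, with an arbitrary made-up profile of $p_i$ and $q_i$:

```python
# Sketch: the per-distance term u_d of Theorem 2 vs. direct evaluation of
# u_D = 1/p_D, u_d = (q_d/p_d) u_{d+1} + 1/p_d on a made-up profile.

def u_closed(p, q, d, D):
    """u_d from Theorem 2; p and q are dicts indexed by goal distance 1..D."""
    prod = 1.0
    for i in range(d, D):
        prod *= q[i] / p[i]
    total = prod / p[D]
    for j in range(d, D):
        term = 1.0 / p[j]
        for i in range(d, j):
            term *= q[i] / p[i]
        total += term
    return total

D = 8
p = {i: 0.2 + 0.05 * i for i in range(1, D + 1)}   # made-up profile
q = {i: 0.5 - 0.03 * i for i in range(1, D + 1)}   # p_i + q_i <= 1 (fair)

u = {D: 1.0 / p[D]}
for d in range(D - 1, 0, -1):
    u[d] = (q[d] / p[d]) * u[d + 1] + 1.0 / p[d]

for d in range(1, D + 1):
    assert abs(u_closed(p, q, d, D) - u[d]) < 1e-9 * u[d]
print("Theorem 2 formula matches the recurrence")
```

The hitting time $h_{xv}$ is then the sum of these $u_d$ over $1 \leq d \leq d_G(x,v)$, exactly as in the theorem.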

\subsection{Example domain: One-handed Gripper}

\begin{table}[tp]%
\centering
\begin{tabular}[t]{ cclcccc }
Cat.   &  room   &  hand & $pc$ & $rc$ & $rf$ & $b$\\
\vspace{0.1cm}
$1$  &  $A$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1 & 1 \\
\vspace{0.1cm}
$2$  &   $A$   & empty & $\frac{|A|}{|A| + 1}$ & $\frac{1}{|A| + 1}$ & $\frac{1}{|A|}$ & $|A|$\\
\vspace{0.1cm}
$3$  &  $B$   &  full       & $\frac{1}{2}$ & $\frac{1}{2}$ & 1& 1\\
\vspace{0.1cm}
$4$  &  $B$   &  empty & $\frac{1}{|B| + 1}$ & $\frac{|B|}{|B| + 1}$ & $|B|$ & $|B|$\\
\end{tabular}
\caption{Structural properties of One-handed Gripper. Room specifies the
location of the robot. $|A|$ and $|B|$ denote the number of
balls in A and B.}
%\martin{also list rf and b, they are used in the text}
\label{table:gripper}
\end{table}

Consider a one-handed Gripper
domain, where a robot must move $n$ balls from room A to room B, 
using the actions of
picking up a ball, dropping the ball it holds, or moving to the other room. 
The states of the search space fall into four categories shown in Table \ref{table:gripper}.
The search space is fair weakly homogeneous: any two states with
the same goal distance have the same distribution of
balls in the rooms and also belong to the same category. The graph is fair
since no action can change the goal distance by more than one. Therefore,
Theorem \ref{thr:weak} can be used to compute the expected hitting time. 
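Since a Gripper state is fully described by the number of balls left in room A, the robot room, and the hand contents, the expected hitting time can also be obtained directly by solving the linear hitting-time system over this small state space. The following Python sketch is ours, not the paper's experimental code; for $n=1$ the system can be solved by hand and gives exactly 15 expected steps:

```python
# Sketch: exact expected hitting time for one-handed Gripper under uniform
# random action selection, via the linear system (I - P) h = 1 solved with
# exact rational arithmetic. State = (balls left in A, robot room, hand full?).
from fractions import Fraction

def successors(state, n):
    """Uniform random-walk transition distribution from one Gripper state."""
    a, room, hold = state
    b = n - a - hold                  # balls currently in room B
    moves = [((a, 'B' if room == 'A' else 'A', hold), 1)]      # move
    if hold == 0 and room == 'A' and a > 0:
        moves.append(((a - 1, 'A', 1), a))                     # pick a ball in A
    if hold == 0 and room == 'B' and b > 0:
        moves.append(((a, 'B', 1), b))                         # pick a ball in B
    if hold == 1:                                              # drop held ball
        moves.append((((a + 1) if room == 'A' else a, room, 0), 1))
    total = sum(w for _, w in moves)
    return [(s, Fraction(w, total)) for s, w in moves]

def gripper_hitting_time(n):
    """Expected steps from (n balls in A, robot in A, hand empty) to the goal."""
    states = [(a, room, hold) for hold in (0, 1)
              for a in range(n + 1 - hold) for room in 'AB']
    goal = lambda s: s[2] == 0 and s[0] == 0        # all balls in B
    unknown = [s for s in states if not goal(s)]
    idx = {s: i for i, s in enumerate(unknown)}
    m = len(unknown)
    A = [[Fraction(0)] * m for _ in range(m)]
    rhs = [Fraction(1)] * m
    for s in unknown:                               # build (I - P) h = 1
        i = idx[s]
        A[i][i] += 1
        for t, pr in successors(s, n):
            if not goal(t):
                A[i][idx[t]] -= pr
    for c in range(m):                              # Gauss-Jordan elimination
        piv = next(r for r in range(c, m) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        inv = 1 / A[c][c]
        A[c] = [x * inv for x in A[c]]
        rhs[c] *= inv
        for r in range(m):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                rhs[r] -= f * rhs[c]
    return rhs[idx[(n, 'A', 0)]]

print(gripper_hitting_time(1))   # 15, matching the hand computation
```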

\begin{figure}
%\centering
\includegraphics[width=0.47\textwidth, height=0.23\textheight ]{RESOURCES/gripper.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:gripper} Average number of generated states, varying the number of balls (x-axis), in the Gripper domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:gripper} plots the predictions of Theorem \ref{thr:weak} together with
the results of a scaling experiment, varying $n$ for both random walks and greedy best first search. 
To simulate the behavior of both algorithms in plateaus with a lack
of heuristic guidance, a blind heuristic is used,
which returns 1 for any non-goal state and 0 for goals. 
Search stops as soon as it finds a state with a heuristic value lower than
that of the initial state. Because of the blind heuristic, the only such state is the goal state. 
%\martin{I guess there can be more than one goal state in theory, but in practice there is only one since the robot will be
% in room B.}
The prediction matches the experimental results extremely well. 
Random walks outperform greedy best-first search.
%\martin{this sounds vague. why do we not have a solid analysis?}
The reason is that while in most states the regress factor 
almost equals the effective branching factor of GBFS, in
almost $\frac{1}{4}$ of all states, namely those in category 2 of Table \ref{table:gripper}, 
the regress factor is significantly
smaller than the effective branching factor. 
%\martin{I still do not understand this text. What is 1/4 of what? What is the reason?}
%Therefore, a better performance of random walks compared to systematic 
%search is expected. 

\section{Biased Action Selection for Random Walks}
Regress factors can be changed by biasing the action selection in the random walk. 
One very natural bias is to use a two-level scheme, where first an action type
is selected uniformly at random, then the chosen action is grounded in a second step.
In the Gripper domain, there are three types of actions: pick up, drop, and move. In the 
case of pick up, one of the balls in the same room is selected uniformly at random in the second step. 

When using this biased selection, the search space is fair homogeneous with $q=p=\frac{1}{2}$. 
The experimental results and theoretical prediction for
such walks are included in Figure \ref{fig:gripper}. The hitting time 
grows only linearly with $n$ here. It is interesting that this natural
way of biasing random walks
is able to exploit the symmetry inherent in the gripper domain. 
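The effect of the two-level scheme can be quantified exactly on small instances by solving the hitting-time systems for both selection schemes. The Python sketch below uses our own state encoding (balls left in A, robot room, hand contents); for $n=1$, where every action type has a single grounding, the two schemes coincide:

```python
# Sketch: uniform vs. two-level (type-first) action selection on one-handed
# Gripper, comparing exact hitting times from the linear system (I - P) h = 1.

def uniform_step(a, room, hold, n):
    """(next_state, weight) pairs; one weight unit per ground action."""
    b = n - a - hold
    out = [((a, 'B' if room == 'A' else 'A', hold), 1.0)]            # move
    in_room = a if room == 'A' else b
    if hold == 0 and in_room > 0:                                    # pick
        out.append((((a - 1, 'A', 1) if room == 'A' else (a, 'B', 1)),
                    float(in_room)))
    if hold == 1:                                                    # drop
        out.append((((a + 1) if room == 'A' else a, room, 0), 1.0))
    return out

def biased_step(a, room, hold, n):
    """Pick an applicable action type uniformly; groundings collapse by symmetry."""
    return [(s, 1.0) for s, _ in uniform_step(a, room, hold, n)]

def hitting_time(n, step):
    states = [(a, r, h) for h in (0, 1) for a in range(n + 1 - h) for r in 'AB']
    unknown = [s for s in states if not (s[2] == 0 and s[0] == 0)]
    idx = {s: i for i, s in enumerate(unknown)}
    m = len(unknown)
    A = [[0.0] * m for _ in range(m)]
    rhs = [1.0] * m
    for s in unknown:
        i = idx[s]
        A[i][i] += 1.0
        total = sum(w for _, w in step(*s, n))
        for t, w in step(*s, n):
            if not (t[2] == 0 and t[0] == 0):
                A[i][idx[t]] -= w / total
    for c in range(m):                    # Gaussian elimination, partial pivoting
        piv = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        d = A[c][c]
        A[c] = [x / d for x in A[c]]
        rhs[c] /= d
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                rhs[r] -= f * rhs[c]
    return rhs[idx[(n, 'A', 0)]]

assert abs(hitting_time(1, uniform_step) - hitting_time(1, biased_step)) < 1e-9
assert hitting_time(8, biased_step) < hitting_time(8, uniform_step)
print("two-level bias beats uniform selection on Gripper")
```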

\section{Restarting Random Walks}
The restarting random walk model used here is a random walk which
\textit{restarts} from a fixed initial
state $s$ with probability $r$ at each step,
and uniformly randomly chooses among neighbor states with probability $1-r$. 

\begin{mydef}[Restarting Random Walk]
Let $G=(V,E)$ be a graph, $s \in V$ the initial state, and $ r \in [0, 1]$. 
A restarting random walk $RRW(G, s, r)$
is a Markov chain $M_G$ with states $V$ and transition probability
$p_{uv}$ for $u, v \in V$ of:  
\begin{align*}
 p_{uv}=  
  \begin{dcases}
   \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v \neq s \\
   r + \frac{1-r}{|N_G(u)|}&  \text{if } (u,v) \in E, v = s \\ 
   0 &  \text{if } (u,v) \notin E, v \neq s \\ 
   r &  \text{if } (u,v) \notin E, v = s \\ 
  \end{dcases} 
\end{align*}
\end{mydef}

Similarly to Lemma \ref{lem:distance}, the hitting time for the RRW model
depends only on the goal distance.
The proof is also similar, except that for goal distances $x < d_G(s, v)$,
Equation \ref{eq:unit} 
is replaced by $u_x=\frac{1}{p} + \frac{q}{p}u_{x+1}+\frac{r}{p(1-r)} \sum^{d}_{i = x+1}u_i$.
The third term accounts for the expected cost of progressing from $s$ back to
goal distance $x$ after each restart. Theorem \ref{thr:restarting} is the main result:

\begin{mythe}\label{thr:restarting}
Let $G=(V,E)$ be fair homogeneous, $v,s \in V$, $r \in [0,1]$, $p=pc_G(v)$, and $q=rc_G(v)$,
$RRW(G, s, r)$ a restarting random walk, and $M = \frac{q}{p}+ \frac{r}{p(1-r)}$. Then
\begin{equation}
\label{eq:restarting}
h_{sv} = h_{d(s,v)} \in O\left(\left(M+1\right)^{d(s,v)+1}\left(\frac{q+r}{pr}\right)\right)
\end{equation}
\end{mythe}

\begin{proof}
Let $d=d_G(s,v)$. By the Markov property of random walks \cite{Norris},
\begin{align}
h_0 &= 0 \label{eq:example1} \\
h_x &= (1-r)\left(qh_{x+1}+ph_{x-1} + (1-p-q)h_x + 1\right) + rh_d \label{eq:example2}
\end{align}
Let $u_x = h_x - h_{x-1}$. Then for $x < d$,
\begin{align}
u_x &= \frac{1}{p} + \frac{q}{p}u_{x+1} + \frac{r}{p(1-r)} \sum^{d}_{i = x+1}u_i \notag\\
 &< \frac{1}{p} + \left(\frac{q}{p}+ \frac{r}{p(1-r)}\right) \sum^{d}_{i = x+1}u_i \label{eq:restart}
\end{align}
%Let $M = \left(\frac{q}{p}+ \frac{r}{p(1-r)}\right)$. 
Induction on $x$, with base case $x=d-1$, proves that
 $\frac{1}{p} + M \sum^{d}_{i = x+1}u_i =
\left((M+1)^{d-x} - (M+1)^{d-x-1}\right)u_d + \frac{(M+1)^{d-x-1}}{p}$.
From the hitting time formula $h_x = \sum^x_{i=1}u_i$ and elementary properties
of geometric series it follows that

\begin{align*}
h_x< \frac{\left((M+1)^{d+1} - (M+1)^{d}\right)}{M} u_d+ \frac{(M+1)^{d}-1}{Mp}
\end{align*}

Furthermore, $u_d = h_d - h_{d-1} = \frac{q}{p} u_{d+1} + \frac{1}{p}$.
After moving to goal distance $d+1$, a random walk on average restarts within $\frac{1}{r}$ steps,
and thereby returns to the initial goal distance $d$; hence $u_{d+1} \leq \frac{1}{r}$.
Substituting, $u_d \leq \frac{q}{pr} + \frac{1}{p} = \frac{q+r}{pr}$, and

\begin{align*}
h_x < \left(\frac{(M+1)^{d+1} - (M+1)^{d}\ }{M}\right) \left(\frac{q+r}{pr}\right)\\ + \frac{(M+1)^{d}-1}{Mp} 
\in \, O\left(\left(M+1\right)^{d(s,v)+1}\left(\frac{q+r}{pr}\right)\right)
\end{align*}\qedhere
\end{proof}

Comparing the exponential terms $(M + 1)^{d(s,v)+1}$ and $(\frac{q}{p})^D$ in the hitting times of
RRW and RW, Equations \ref{eq:restarting} and \ref{eq:homos}:
the base of the exponential term for RRW
equals the regress factor $\frac{q}{p}$, the base of the exponential term for RW,
plus $\frac{r}{p(1-r)} + 1$, which can
translate into larger hitting times for RRW.
However, by choosing a small enough restart probability $r$, the term $\frac{r}{p(1-r)}$ can be made arbitrarily small.
The main advantage of RRW over simple random walks
is that the exponent of the exponential term is reduced from $D$ to $d(s,v)$,
which can make a huge difference, especially when $d(s,v)$ is small.
However, RRW is predicted to be somewhat weaker when $v$ is far away, so that $d(s,v)$
is close to $D$.
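This tradeoff can be checked empirically with a small simulation on a fair homogenous chain. The sketch below is illustrative only: the function names and parameter values are assumptions, not taken from the paper, and regress moves are truncated at distance $D$.

```python
import random

def hitting_time(D, d0, p, q, r, rng, max_steps=10**6):
    """One (restarting) random walk on a fair homogenous chain.

    States are goal distances 0..D.  From distance x the walk progresses
    to x-1 with chance p, regresses to x+1 with chance q (truncated at D),
    and otherwise stays put.  With probability r a step instead restarts
    to the initial distance d0; r = 0 gives a simple random walk.
    """
    x, steps = d0, 0
    while x > 0 and steps < max_steps:
        steps += 1
        if r > 0 and rng.random() < r:
            x = d0                      # restart
            continue
        u = rng.random()
        if u < p:
            x -= 1                      # progress towards the goal
        elif u < p + q and x < D:
            x += 1                      # regress away from the goal
    return steps

def mean_hitting_time(D, d0, p, q, r, runs=200, seed=0):
    rng = random.Random(seed)
    return sum(hitting_time(D, d0, p, q, r, rng) for _ in range(runs)) / runs
```

With $q > p$ (regress factor above 1) and $d_0$ much smaller than $D$, a small positive restart rate typically reduces the mean hitting time by orders of magnitude, while for $d_0$ close to $D$ it can be somewhat slower, matching the predicted change of the exponent from $D$ to $d(s,v)$.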

\subsection{A Grid Example}
\begin{figure}
%\centering
\includegraphics[width=0.47\textwidth, height=0.26\textheight ]{RESOURCES/grid.jpg}
\vspace{-0.25cm}
  \caption{\label{fig:grid} The average number of generated states as a function of
  the goal distance of the start state (x-axis), for different restart rates, in the Grid domain.}
\vspace{-0.2cm}
\end{figure}

Figure \ref{fig:grid} shows the results of RRW with restart rate $r \in \{ 0, 0.1, 0.01, 0.001 \}$
in a variant of the Grid domain. This domain features an $n\times n$ grid with a robot 
that needs to first pick up a key at location $(n,n)$, then unlock a door at
$(0,0)$. 
To make the problem nontrivial, with regress factors larger than 1,
the robot can move in at most three directions in most of the grid: left, up, or down.
Only in the top row is the robot allowed to move right (but not up).

In this domain, all states before the robot picks up the key share the same $h_{FF}$ value. 
Figure \ref{fig:grid} shows the average number of states generated until the subgoal of
picking up the key is reached, with the robot starting from different goal
distances plotted on the x-axis. Since the regress factors
are not uniform in this domain, Theorem \ref{thr:restarting} does not apply directly. 
Still, comparing the results of RRW for different $r>0$ with
simple random walks where $r=0$, the experiment confirms the high-level predictions of
Theorem \ref{thr:restarting}: 
\begin{itemize}
\item RRW generates slightly more states than simple random walks, by at most a factor of two,
when the initial goal distance is large, $d \geq 14$, and $r$ is small enough.
\item RRW is much more efficient when $d$ is small; for example, it generates three orders of magnitude
fewer states for $d=2$, $r=0.01$.
\end{itemize}
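As a rough illustration, the following sketch simulates a restarting random walk in a hypothetical reconstruction of this Grid variant. The exact geometry and move set are assumptions, not the paper's implementation, and it counts walk steps rather than the generated-states measure reported above.

```python
import random

def steps_until_key(n, start, r, rng, max_steps=10**6):
    """Walk until the key at (n, n) is reached.

    Assumed geometry: the robot may move left, up, or down anywhere on the
    (n+1) x (n+1) grid, and right only in the top row y == n.  With
    probability r each step it restarts to `start`; r = 0 gives a simple
    random walk.
    """
    x, y = start
    steps = 0
    while (x, y) != (n, n) and steps < max_steps:
        steps += 1
        if r > 0 and rng.random() < r:
            x, y = start                      # restart
            continue
        moves = []
        if x > 0:
            moves.append((x - 1, y))          # left
        if y < n:
            moves.append((x, y + 1))          # up
        if y > 0:
            moves.append((x, y - 1))          # down
        if y == n and x < n:
            moves.append((x + 1, y))          # right: top row only
        x, y = rng.choice(moves)
    return steps
```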


\section{Extension to Bounds for Other Graphs}
While many planning problems cannot be exactly modeled as 
fair homogenous or weakly homogenous graphs, these models
can still be used to obtain upper bounds on the hitting time in any fair graph $G$. 
Consider a corresponding weakly homogenous graph $G'$ 
with progress and regress chances at each goal distance $d$ set to 
the minimum progress and maximum regress chances over all nodes
at goal distance $d$ in $G$.
Then the hitting times for $G'$
will be an upper bound for the hitting times in
$G$ \cite{TR}.
In $G'$, regressing away from the goal is at least as probable as in $G$,
and progressing towards the goal is at most as probable.  
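As a minimal sketch of this construction, the following helper computes the per-distance chances of such a $G'$. The data layout and names are illustrative assumptions: each node is mapped to its goal distance, progress chance, and regress chance.

```python
from collections import defaultdict

def bounding_chances(nodes):
    """Per-distance chances for a weakly homogenous bounding graph G'.

    `nodes` maps node -> (goal_distance, progress_chance, regress_chance);
    this layout is an illustrative assumption, not the paper's code.
    For every goal distance d, G' gets the minimum progress chance and
    the maximum regress chance over all nodes at distance d, so hitting
    times in G' upper-bound those in the original graph.
    """
    by_d = defaultdict(list)
    for dist, pc, rc in nodes.values():
        by_d[dist].append((pc, rc))
    return {d: (min(pc for pc, _ in cs), max(rc for _, rc in cs))
            for d, cs in by_d.items()}
```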


\section{Related Work}
Random walks have been extensively studied in many different scientific fields such as 
physics, finance, and computer networking \cite{rw_network,rw_finance,rw_supply}.
Discrete and continuous random walks are well studied
\cite{Norris,Aldous,Yin,pardoux}. 
The standard approach to find the hitting time in a graph is to write the linear equations
for the hitting times as in Equations \ref{eq:example1} and \ref{eq:example2}, and solve 
them by linear algebra. In contrast, the techniques used
in this paper mainly build on methods for finding the hitting times of
simple chains such as birth--death and gambler's ruin chains \cite{Norris}. The advantage of
these methods is that solutions can be expressed easily as functions
of chain features. 

Studying the properties of random walks on finite graphs has a very long history
surveyed in \cite{lovasz}. 
One of the most relevant results is that the hitting time of a random walk in an 
undirected graph with $n$
nodes is $O(n^3)$ \cite{Brightwell}. 
%This is the best known upper-bound for the hitting time in 
%a general graph. 
However, this result does not explain the strong performance of random walks
in planning search spaces, which grow exponentially with the number of objects.
Despite the rich existing literature on random walks, the application to 
the analysis of random walk planning seems to be novel. 

\section{Future Work}
One of the promising directions for future work is to generalize Theorem \ref{thr:restarting} to 
non-fair graphs, 
where an action can increase the goal distance by more than one. Such graphs can be used
to model planning problems with dead ends. 
Another research direction is to analyze the behavior of enhanced Monte Carlo 
random walk algorithms such as MHA or MDA \cite{Arvand}, 
which utilize valuable information such as preferred operators and the 
density of dead ends to
bias action selection in random walks. Finally, it seems promising to further study
methods that combine random walks with memory and systematic search.

\newpage
\bibliography{aaai2012}
\bibliographystyle{aaai}

\end{document}






