%\vspace{-0.1in}
\section{Simulation Results} \label{sec:exp}
We use the following five important graph generative models in our experiments. Several of these have also been used for random walk experiments in other papers; see, e.g., \cite{GkantsidisMS04}. Together, these graphs cover a wide spectrum: fast mixing to slow mixing, uniform degrees to very skewed degrees, small diameter to large diameter, etc., thereby testing the algorithm in the extreme cases as well as the benign ones.

\begin{itemize}
\item Regular Expander: We work with the most commonly studied random graph model, $G(n,p)$. Here, each of the $n(n-1)/2$ possible edges is present independently with probability $p$. We choose $p = \log n/n$ so that the expected number of edges is roughly $(n\log n)/2$ and the expected degree of every vertex is $\log n$. With high probability, this results in a graph with good expansion, and it is regular in expectation, i.e., every vertex has the same expected degree.
\item Two-tier topologies with clustering: First we construct four isolated, roughly regular expanders of the same size, each drawn from $G(n,p)$ as above; think of these as independent clusters. Then from each cluster we pick a small number of nodes (roughly one-fourth the size of the cluster) and connect them using another $G(n,p)$; think of this as a tier-two cluster. Again we use the same value of $p$ as above.
\item Power-law graphs: In distributed settings, many important networks are known to have power-law degree distributions. We use the well-known preferential attachment growth model to construct random power-law graphs. The process starts with a small clique (of 5 nodes) and then adds vertices sequentially. Each new vertex connects, independently, to each of the previous vertices with a probability depending on their degrees. Specifically, the new vertex connects to a previous vertex $v$ with probability proportional to $\deg(v)^{\alpha}$, where the exponent $\alpha$ is a parameter.
\item Random Geometric Graph: A random geometric graph is a random undirected graph drawn on the bounded region $[0,1)\times [0,1)$. It is generated by placing $n$ vertices uniformly and independently at random in the region (i.e., both the $x$ and $y$ coordinates are picked uniformly and independently). Edges are then constructed deterministically: two vertices $u$ and $v$ are connected by an edge if and only if the distance between them is at most a threshold parameter $r$. We choose $r = \sqrt{\frac{\log n}{n}}$ so that the degree of each vertex is $O(\log n)$ with high probability.
\item Grid Graph: Consider a square grid graph ($\sqrt{n}\times \sqrt{n}$), which is the Cartesian product of two path graphs, each on $\sqrt{n}$ vertices. Since a path graph is a median graph, the square grid graph is also a median graph. All grid graphs are bipartite (since they have no odd-length cycles).
\end{itemize}
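As a concrete reference, two of the above models (the $G(n,p)$ expander and the random geometric graph) can be generated as follows. This is a minimal Python sketch under our own conventions (adjacency sets, illustrative function names); it is not the code used to produce the plots:

```python
import math
import random

def gnp(n, p, seed=0):
    """G(n,p): each of the n(n-1)/2 possible edges is present
    independently with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def random_geometric(n, r, seed=0):
    """Random geometric graph on [0,1) x [0,1): u ~ v iff dist(u, v) <= r."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            dx = pts[u][0] - pts[v][0]
            dy = pts[u][1] - pts[v][1]
            if dx * dx + dy * dy <= r * r:
                adj[u].add(v)
                adj[v].add(u)
    return adj

n = 1000
G = gnp(n, math.log(n) / n)                          # expected degree ~ log n
H = random_geometric(n, math.sqrt(math.log(n) / n))  # degree O(log n) w.h.p.
```

The other three models are built analogously, e.g., the grid by connecting lattice neighbors and the power-law graph by degree-proportional sampling of attachment targets.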

We compute and maintain a pre-processing table containing $\eta_v = \eta \deg(v)\log n$ short walks of length $\lambda$ from each vertex $v$. We then check how many walks of length $\ell$ can be performed using this table before we hit a node all of whose short walks have been exhausted. The source node of each $\ell$-length random walk request is sampled randomly according to the degree distribution.
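The table-based stitching that this experiment measures can be sketched as follows. This is a minimal, sequential Python simulation under our own simplifying assumptions (only each short walk's endpoint is stored, a cycle serves as a stand-in graph, and the names `preprocess` and `continuous_walk` are illustrative); the actual algorithm is distributed:

```python
import math
import random

def preprocess(adj, eta, lam, seed=0):
    """For each node v, pre-sample eta * deg(v) * log n independent short
    walks of length lam; store only each walk's endpoint (enough to stitch)."""
    rng = random.Random(seed)
    n = len(adj)
    table = {}
    for v, nbrs in adj.items():
        rows = max(1, int(eta * len(nbrs) * math.log(n)))
        endpoints = []
        for _ in range(rows):
            u = v
            for _ in range(lam):
                u = rng.choice(sorted(adj[u]))  # one uniform random step
            endpoints.append(u)
        table[v] = endpoints
    return table

def continuous_walk(table, source, ell, lam):
    """Answer one length-ell walk by stitching pre-sampled short walks.
    Returns (completed, rows_consumed); fails when the current node's
    short walks are exhausted, forcing a fresh pre-processing call."""
    v, steps, rows = source, 0, 0
    while steps < ell:
        if not table[v]:
            return False, rows
        v = table[v].pop()
        rows += 1
        steps += lam
    return True, rows

# Toy run on a cycle (a stand-in for the graph models above).
n = 200
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
ell, lam, eta = 100, 10, 1.0
table = preprocess(adj, eta, lam)
ok, rows_used = continuous_walk(table, source=0, ell=ell, lam=lam)
```

In this toy run the walk completes after consuming $\lceil \ell/\lambda \rceil$ table rows; the experiments instead issue walk requests repeatedly until some node's rows run out.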

We perform experiments on each of the aforementioned synthetically generated graphs, varying different parameters. In particular, we conduct separate experiments for each of (a) varying the length of the walk ($\ell$) as a function of $n$, (b) varying the number of nodes ($n$), (c) varying the length of the short walks ($\lambda$) stored in the pre-processing table, and (d) varying the number of short walks stored at each node via the parameter $\eta$. While a specific parameter is being varied for a plot, the others are held at their default values: $n =$ 10,000, $\ell = n$, $\eta = 1$, and $\lambda = \sqrt{\ell}$.

Since we are interested in how many random walks of length $\ell$ can be performed in a continuous manner with small round and message complexity, this translates to analyzing the utilization of one specific pre-processing table before {\sc Continuous-Random-Walk} gets stuck and needs to invoke another call to {\sc Pre-Processing}. In particular, to analyze the round complexity, we conduct a set of experiments to evaluate $\kappa$, the fraction of rows of the {\sc Pre-Processing} table used before the algorithm gets stuck ($\kappa$ is plotted on the $y$-axis). As mentioned in the previous section, this gives a bound of $\ell/\kappa$ on the round complexity. In particular, if $\kappa$ is a constant and large enough, this shows excellent utilization and an asymptotically optimal round complexity. Similarly, for message complexity, we conduct a second set of plots that calculates the message complexity on the $y$-axis from $\kappa$ and $D$, for easier visualization.
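Concretely, the plotted quantity can be computed as below (a trivial helper with made-up toy counts, shown only to fix the definition of $\kappa$; the per-node numbers are not measured data):

```python
def utilization(rows_stored, rows_consumed):
    """kappa: fraction of pre-processed short-walk rows consumed before
    the continuous walk gets stuck and pre-processing must be re-run."""
    return sum(rows_consumed.values()) / sum(rows_stored.values())

# Toy per-node row counts (illustrative only).
stored = {0: 10, 1: 10, 2: 10, 3: 10}
consumed = {0: 9, 1: 7, 2: 4, 3: 4}
kappa = utilization(stored, consumed)  # 24 rows used out of 40 stored
```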

We plot graphs by varying each of the parameters $\ell, n, \lambda, \eta$. Each figure contains five lines, one for each of the above network models. For each plotted value, we perform ten different runs and present the average.
%In Figure~\ref{fig:plot1b}, we plot all ten values from independent runs as data points, to highlight the fact that our algorithm does not have high variance, and in fact each of these runs perform well; i.e. we show concentration of these data points corresponding to one of these plots here.

\subsection{Short walk utilization factor $\kappa$}
%% USED FRACTION FIGURES

\noindent{\bf Varying $\ell$} [Figure~\ref{fig:plot1c}]: Here $n$ is fixed at 10,000 and $\ell$ varies as $n^{0.5}, n^{0.6}, \ldots, n^{1.2}$; $\lambda$ is $\sqrt{\ell}$ and $\eta$ is $1$. In this case we see that at least 50\% of the pre-processed short walk rows are used up. The utilization is even better for some of the graph topologies, such as $G(n,p)$ and the two-tier clustering graph, reaching around 80\%. Therefore, for the entire range of $\ell$, from small to very large, our algorithm performs extremely well: in particular, $\kappa$ is a large constant, and therefore the round complexity and message complexity are close to optimal, i.e., within a constant factor of the best possible. \\

\begin{figure}[htbp]
  \centering
%  \includegraphics[width=0.7\linewidth]{Graph_Construction.eps}\\
  \includegraphics[width=0.6\linewidth]{Rplot1c.pdf}\\
  \caption{Varying the length of the walk $\ell$. $n=10K, \eta = 1, \lambda = \sqrt{\ell}$}
  \label{fig:plot1c}
\end{figure}


%
%\begin{figure}[h]
%  \centering
%  \includegraphics[width=0.6\linewidth]{Rplot1b.pdf}\\
%  \caption{varying length of the walk $\ell$. $n=10K, \eta = 1, \lambda = \sqrt{\ell}$. }
%\label{fig:plot1b}
%\end{figure}
%
%\noindent{\bf Concentration of different runs} [Figure~\ref{fig:plot1b}]: We see that all the data points are clustered together and for each one of them the value of the $y$-axis is a large constant at least $0.4$. Therefore our algorithm not only demonstrates good average case guarantees but also shows very high worst case guarantees over all the runs.

\begin{figure}[htbp]
  \centering
%  \includegraphics[width=0.7\linewidth]{Graph_Construction.eps}\\
  \includegraphics[width=0.6\linewidth]{Rplot2.pdf}\\
  \caption{Varying the number of nodes $n$. $\ell = n$, $\eta = 1, \lambda = \sqrt{\ell}$}
  \label{fig:plot2}
\end{figure}

\noindent{\bf Varying $n$} [Figure~\ref{fig:plot2}]: The number of nodes $n$ varies between 1000 and 10,000. We see that in all of the graphs, the utilization of pre-processed short walks before the algorithm terminates is at least 60\%. We also see that in some of the graphs, the utilization is substantially higher. Thus, even as the graph size scales, our performance remains equally good. \\

\begin{figure}[htbp]
\centering
\includegraphics[width = 0.6\linewidth]{Rplot3.pdf}\\
\caption{Varying the number of short walks $\eta$. $n = 10K, \ell = n, \lambda = \sqrt{\ell}$}
\label{fig:plot3}
\end{figure}


\noindent{\bf Varying $\eta$} [Figure~\ref{fig:plot3}]: We see that the used fraction of rows increases with the short-walk parameter $\eta$. Even for $\eta$ as small as $1$, on all graph topologies, the utilization $\kappa$ on the $y$-axis is at least $0.6$, i.e., at least 60\% of all the short walks get used. This means that storing $\deg(v)\log n$ short walks at each node $v$ suffices, and therefore the round and message complexity remain near-optimal, as proved previously.

\begin{figure}[H]
  \centering
  \includegraphics[width=0.6\linewidth]{Rplot4.pdf}\\
  \caption{Varying the length of the short walks $\lambda$. $n = 10K, \ell = n, \eta = 1$}
  \label{fig:plot4}
\end{figure}


\noindent{\bf Varying $\lambda$} [Figure~\ref{fig:plot4}]: The default value of $\lambda$ is $\sqrt{\ell}$. In this plot, we vary $\lambda$ from $0.25\sqrt{\ell}$ to $\sqrt{\ell}$ in doubling steps. We see that the utilization remains roughly the same throughout the plot. Even though the algorithm needs to choose $\lambda$ to optimize for rounds and messages, this plot shows that it performs well for any of these values. \\

\noindent {\bf Summary of observed round complexity:} To summarize the plots for varying different parameters on the $x$-axis, we see that in all the plots, the value of $\kappa$ on the $y$-axis is a constant and usually at least $0.5$. Since $\kappa$ is $1$ for optimal or perfect utilization of the table, we see that for all parameter values, the utilization is only a small constant factor (around $2$) away from the optimal. Therefore, the round complexity, as proven in the previous section, increases only marginally. 


\subsection{Message complexity plots}

\noindent{\bf Varying $\ell$} [Figure~\ref{fig:Mplot1}]: In this plot, we vary $\ell$ and note the message complexity of the algorithm {\sc Continuous-Random-Walk} per random walk ({\sc Single-Random-Walk}) request within it. For any walk of length $\ell$, the optimal number of messages would be $\ell$ itself. Notice that in our plot, all the lines (that is, for all the graphs) are very close to the $x = y$ line, which is the optimal line. Therefore, the amortized efficiency of {\sc Continuous-Random-Walk} is almost the best possible.\\

\begin{figure}[htbp]
  \centering
%  \includegraphics[width=0.7\linewidth]{Graph_Construction.eps}\\
  \includegraphics[width=0.6\linewidth]{Mplot1.pdf}\\
  \caption{Varying the length of the walk $\ell$. $n=10K, \eta = 1, \lambda = \sqrt{\ell}$}
  \label{fig:Mplot1}
\end{figure}



\begin{figure}[htbp]
  \centering
%  \includegraphics[width=0.7\linewidth]{Graph_Construction.eps}\\
  \includegraphics[width=0.6\linewidth]{Mplot2.pdf}\\
  \caption{Varying the number of nodes $n$. $\ell = n$, $\eta = 1, \lambda = \sqrt{\ell}$}
  \label{fig:Mplot2}
\end{figure}

\noindent{\bf Varying $n$} [Figure~\ref{fig:Mplot2}]: In this plot as well, since we use the default value of $\ell = n$, the best possible message complexity is $n$, which corresponds to the $x = y$ line. Notice that, again for all the graphs, the message complexity lines, through the entire range, are almost the best possible: they are straight lines with slope very close to that of the $x = y$ line. \\

\begin{figure}[htbp]
\centering
\includegraphics[width = 0.6\linewidth]{Mplot3.pdf}
\caption{Varying the number of short walks $\eta$. $n = 10K, \ell = n, \lambda = \sqrt{\ell}$}
\label{fig:Mplot3}
\end{figure}

\noindent{\bf Varying $\eta$} [Figure~\ref{fig:Mplot3}]: As $\eta$ is increased from $0.25$ to $4$, we see that the message complexity decreases rapidly. It is expected that as the number of pre-processed rows increases, the efficiency, and therefore the message complexity, improves. The sharp decline in this plot, however, also suggests that even a small $\eta$ is sufficient to bring the message complexity down close to optimal, regardless of the graph topology. \\

\begin{figure}[htbp]
  \centering
%  \includegraphics[width=0.7\linewidth]{Graph_Construction.eps}\\
  \includegraphics[width=0.6\linewidth]{Mplot4.pdf}\\
  \caption{Varying the length of the short walks $\lambda$. $n = 10K, \ell = n, \eta = 1$}
  \label{fig:Mplot4}
\end{figure}

\noindent{\bf Varying $\lambda$} [Figure~\ref{fig:Mplot4}]: This plot is very similar to that of varying $\eta$: here we see again that as $\lambda$ is increased, the message complexity goes down rapidly. Recall that we are comparing different $\lambda$ values for a fixed $\ell = n$. Our algorithm {\sc Continuous-Random-Walk} uses $\lambda = \sqrt{\ell}$, but we also tried smaller values of $\lambda$ in this plot. As expected, the message complexity is high initially; however, as $\lambda$ increases towards $\sqrt{\ell}$, the message complexity reduces rapidly, improving the algorithm's performance substantially.\\

\noindent {\bf Summary of observed message complexity:} In the naive approach, each random walk requires only $O(\ell)$ messages, but the round complexity increases significantly. At the other extreme, each random walk in~\cite{drw-jacm} is round-efficient but requires $\Omega(m)$ messages! Our algorithm {\sc Continuous-Random-Walk} achieves the best of both worlds, guaranteeing near-best-possible message and round complexity across graph topologies. The experiments suggest that, for a wide range of parameters, the algorithm answers each {\sc Single-Random-Walk} request in a continuous manner with very few messages and rounds. These results corroborate our theoretical guarantees and highlight the practicality of our technique.

