\section{Evaluation}
\label{sec:eval}
To evaluate \sysname{} we built a prototype that implements most of the features
outlined above, along with a client library for building applications on top of
it. The library supports connecting to \sysname{} either as a publisher of
streaming content or as a subscriber to a stream. We have also implemented the
clustering portion of the design, so the client library will automatically
reconnect to a better server when instructed to do so. Using this library we
built several applications to evaluate our system, including one that measures
system throughput and one that transfers files to subscribers. In addition, we
built a model that simulates larger clusters to verify that the properties of
our system hold at scale.

The prototype is $1500$ lines of Java that makes heavy use of threads and Java's
concurrency library. Client communications, intra-cluster data communications,
and intra-cluster organization communications are not only given separate ports;
sending and receiving for each of those communications are also handled on
separate threads, so that sending to other servers or clients does not delay the
reading of incoming data. The concurrent data structures are lock-free, which
keeps contention low. Prototyping the system allowed us to discover flaws in our
probabilistic models early, and we were able to improve the organization
protocols to reduce problem cases such as a server becoming disconnected from
the cluster.
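The per-connection threading pattern described above can be sketched as follows. This is a simplified illustration, not the prototype's actual code; the class and method names are hypothetical. Each connection gets its own outbound queue drained by a dedicated sender thread, so a slow or blocked writer never stalls the thread reading incoming data:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Hypothetical sketch of the send/receive split: the receive thread only
// enqueues outbound packets; a dedicated sender thread drains the queue.
public class ConnectionHandler {
    private final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<>();
    private final Thread sender;

    public ConnectionHandler(Consumer<byte[]> writer) {
        sender = new Thread(() -> {
            try {
                while (true) {
                    writer.accept(outbound.take()); // blocks only this thread
                }
            } catch (InterruptedException e) {
                // interrupted on shutdown; exit the loop
            }
        });
        sender.setDaemon(true);
        sender.start();
    }

    // Called from the receive thread; never blocks on the network.
    public void enqueue(byte[] packet) {
        outbound.add(packet);
    }

    public void stop() {
        sender.interrupt();
    }
}
```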

While the number of server-to-server connections from each server, $S$, is
easily configurable, $S = 5$ appears to be a sweet spot. As $S$ increases we
trade increased redundant network use for higher reliability. An increase in $S$
also decreases the average number of hops between two servers. The first concern
when choosing $S$ is ensuring that every new server receives at least one
incoming edge. We present the probability of a node not being added in
Figure~\ref{fig:SLimit}. This probability is the limit of
$\left(\frac{N-S}{N}\right)^N$ as $N$ approaches infinity, which equals
$e^{-S}$. Although a node detects that it was not added and rejoins, it does so
at the cost of sending a message throughout the entire network, so it is best to
minimize how often a node is left out.
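The convergence of this probability can be checked numerically. The sketch below (not part of the prototype; the class name is ours) evaluates $\left(\frac{N-S}{N}\right)^N$ for growing $N$ and compares it against $e^{-S}$, which for $S = 5$ matches the $0.0067$ entry in Figure~\ref{fig:SLimit}:

```java
// Numerical check that the miss probability ((N-S)/N)^N converges
// to e^{-S} as the cluster size N grows.
public class MissProbability {
    // Probability that a joining node receives no incoming edge.
    public static double missProbability(int n, int s) {
        return Math.pow((double) (n - s) / n, n);
    }

    public static void main(String[] args) {
        int s = 5;
        for (int n : new int[]{100, 1_000, 100_000}) {
            System.out.printf("N=%d: %.6f%n", n, missProbability(n, s));
        }
        // The limiting value: e^{-5} is approximately 0.0067.
        System.out.printf("e^-%d = %.6f%n", s, Math.exp(-s));
    }
}
```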

We constructed a model of our system to simulate its behavior on large numbers
of nodes. Figure~\ref{fig:ModelRuns} presents the statistics we collected from
multiple runs of this model. \% Connect is the percentage of runs that produced
networks with no nodes left out. While it is concerning that $0.04\%$ of
networks leave out a node, the solution is simple: servers can occasionally
request a response from everyone who sends them messages, and if no response
arrives before some timeout, the server rejoins the network. We also present the
maximum and average distance from an arbitrary node to any other node in the
network. These numbers track $\log_5(n)$, where $n$ is the number of nodes in
the graph, which suggests that the mesh contains a well-balanced tree.
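The timeout-based rejoin check could look like the following sketch. This is an illustration of the mechanism described above, not the prototype's code; the class and method names are hypothetical. A server records when each neighbor last answered a probe, and concludes it should rejoin once every neighbor has been silent past the timeout:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the rejoin monitor: track the last time each
// neighbor responded to a probe; if all of them go silent past the
// timeout, assume we have been disconnected and rejoin the network.
public class RejoinMonitor {
    private final long timeoutMillis;
    private final Map<String, Long> lastHeard = new ConcurrentHashMap<>();

    public RejoinMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called whenever a probe response arrives from a neighbor.
    public void recordResponse(String neighborId, long nowMillis) {
        lastHeard.put(neighborId, nowMillis);
    }

    // True when no neighbor has responded within the timeout window,
    // i.e. this server appears cut off and should rejoin.
    public boolean shouldRejoin(long nowMillis) {
        return lastHeard.values().stream()
                .allMatch(t -> nowMillis - t > timeoutMillis);
    }
}
```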

\begin{figure}
\begin{center}
    \begin{tabular}[t]{ | r | c | c | c |}
    \hline
    Nodes & \% Connect & Max Hops & Avg Hops \\ \hline
    10 & 99.96 & 2.01 & 1.30 \\ \hline
    25 & 99.99 & 3.01 & 1.93 \\ \hline
    50 & 99.96 & 3.74 & 2.39 \\ \hline
    100 & 99.97 & 4.04 & 2.84 \\ \hline
    1000 & 99.96 & 6.0 & 4.30 \\ \hline
    \end{tabular}
\end{center}
\caption{Model statistics for various cluster sizes. Hops measure the distance
between cluster servers.}
\label{fig:ModelRuns}
\end{figure}

\begin{figure}[t]
\begin{center}
	\includegraphics[width=1.00\linewidth] {figs/eval-typical.pdf}
	\vspace{-0.08in}
\end{center}
\caption{Upload throughput of a single server using the prototype.}
\label{figure:throughput}
\end{figure}

We evaluated the prototype to see whether a single server could serve multiple
clients without significant loss. With limited resources, we were able to
connect 54 clients to one server and push 5--6~Mbps of data to each; a typical
run is shown in Figure~\ref{figure:throughput}. 5--6~Mbps is sufficient to
stream video at roughly standard definition. In most test runs we saw little to
no packet loss (4, 1, and 0 packets across the three runs measured, each with
over 10,500 packets). This result gives us confidence that the system is
feasible, since intra-cluster traffic should be a fraction of that load (5
neighbors versus 50+ clients), and the parallelizable nature of the system
should let servers achieve high efficiency. Using a lower-level language, better
buffer management, and other optimizations should yield even better performance.
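The packet-loss figures above can be measured with a simple sequence-number check on the subscriber side. The sketch below is illustrative (the class name is ours, not from the prototype): the client tracks the next expected sequence number and counts any gap as dropped packets.

```java
// Hypothetical per-client loss counter: each published packet carries a
// monotonically increasing sequence number, and a gap between the next
// expected number and the one received counts as lost packets.
public class LossCounter {
    private long expectedNext = 0;
    private long lost = 0;

    public void onPacket(long seq) {
        if (seq > expectedNext) {
            lost += seq - expectedNext; // gap = number of dropped packets
        }
        if (seq >= expectedNext) {
            expectedNext = seq + 1;
        }
    }

    public long lostPackets() {
        return lost;
    }
}
```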

\begin{figure}
\begin{center}
    \begin{tabular}{ | r | c |}
    \hline
    S & Limit \\ \hline
    2 & .1353 \\ \hline
    3 & .0498 \\ \hline
   	4 & .0183 \\ \hline
    5 & .0067 \\ \hline
    6 & .0025 \\ \hline
    \end{tabular}
\end{center}
\caption{Limit of the probability that a node is not added, as the number of
nodes approaches infinity.}
\label{fig:SLimit}
\end{figure}
