\section{Design}
\label{sec:design}
\sysname{} is designed to be the backend to a multi-channel streaming service
such as ESPN3 or Internet radio. The content streams should be \emph{loss
tolerant}, meaning that the applications using the data streams can still
function even if some packets are lost. This includes applications like
streaming video or audio, since a drop is just a missed frame or audio
sample, which is typically tolerable for clients. We designed \sysname{} to be:
(a) maintainable - the service operator should have to do little configuration;
(b) scalable - adding new servers should be easy and the number of servers
per content stream should scale with the number of viewers; (c)
fault-tolerant - the impact of data loss should be minimized; and (d) performant
- servers should be able to handle many clients concurrently while also
participating in various meshes.

\sysname{} runs on a cluster of servers that accept clients to stream content.
These servers organize into multicast groups, where each multicast
group handles one content stream. We call a single multicast group a
``mini-mesh.'' All of our multicast groups are referred to as meshes because
they are connected graphs where each node points to a small number of other
nodes. In our prototype, each server in a mesh sends messages to $5$ other
servers (see section~\ref{sec:eval} for how we reached that number). These
meshes are formed in a decentralized, distributed fashion by sharing and
querying information over a larger mesh called the ``organization mesh.'' This
mesh contains every server in the cluster and is used when new servers are
added or when a server is looking to potentially join an existing mini-mesh. By
forming these mini-meshes and using \emph{clustering} (see
section~\ref{sec:clustering}), we ensure that server utilization is high,
bandwidth is not wasted, and resources per content stream can scale with demand.

\subsection{Organization Mesh}
\label{sec:orgmesh}
The organization mesh in \sysname{} is used to add servers to the cluster and to
query the state of the servers in the cluster. Every server in the system
is a member of the organization mesh.
\subsubsection{Joining \sysname{}}
\label{sec:joining}
To join \sysname{}, the only configuration a server needs is a list of
addresses of servers that may already be in the system. On start up, the server will
send a \reqmsg{} message to each server on the list, one at a time, until it
receives a \heremsg{} message. If it does not get a response from a server,
it will time out after a short period and continue through the list. If it makes
it through the whole list without receiving any \heremsg{} messages, it assumes
that it is the first server to start up and waits to respond to the next server
that comes online. The \reqmsg{} message contains a predetermined mesh ID for
the organization mesh (in our prototype we just used the SHA-1 hash of an empty string).
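To illustrate, the bootstrap loop might look like the following sketch, where \texttt{send\_req} and \texttt{wait\_for\_here} are hypothetical transport helpers (not part of the prototype's actual API) and \texttt{wait\_for\_here} returns \texttt{None} on timeout:

```python
import hashlib

# Mesh ID for the organization mesh: the SHA-1 hash of an empty
# string, as in the prototype.
ORG_MESH_ID = hashlib.sha1(b"").hexdigest()

def bootstrap(candidates, send_req, wait_for_here, timeout=1.0):
    """Try each candidate address, one at a time, until one answers
    with a HERE message; return None if the whole list is exhausted,
    in which case we assume we are the first server in the cluster."""
    for addr in candidates:
        send_req(addr, mesh_id=ORG_MESH_ID)
        reply = wait_for_here(addr, timeout)
        if reply is not None:
            return reply  # first HERE message received: we are joining
    return None  # no responses: wait for the next server to come online
```

The per-server timeout value here is illustrative; the source does not specify one.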

When a server receives a \reqmsg{}, it forwards the \reqmsg{} to the servers
that are its neighbors in the mesh, including the new server's address in the
payload to let those servers know how to reach it. Then, if the server is currently
responsible for fewer than $5$ other servers, it will add the new server to its
list; otherwise, it will replace one of its current servers with probability
inversely proportional to the size of the cluster (\ie{} $5 / N$, where $N$ is
the size of the cluster including the new server). Finally, the server sends a \heremsg{} reply to the new server that
contains a list of servers that it was responsible for before the new server
and whether it added the new server. The new server will receive these \heremsg{}
messages from all the servers in the cluster (since the \reqmsg{} is
continuously forwarded), and from the replies it can calculate (a) how many other servers
added it, and (b) the in-degree of all the other servers. Using (a), if the new
server was not added by any server, it sends a new \reqmsg{} with a higher
sequence number until it is added. Using (b), the new server can choose the $5$ servers
with the lowest in-degree in order to keep the graph connected and improve
those servers' redundancy.
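Both sides of this exchange can be sketched in a few lines of Python. The helper names below are illustrative, not the prototype's actual interface; \texttt{here\_replies} stands for the accumulated \heremsg{} payloads, mapping each responding server to the neighbor list it reported:

```python
import random
from collections import Counter

FANOUT = 5  # neighbors per server in the prototype

def maybe_adopt(neighbors, new_server, cluster_size):
    """Receiver-side rule: add the new server if there is room,
    otherwise replace a random existing neighbor with probability
    FANOUT / N, where N includes the new server."""
    if len(neighbors) < FANOUT:
        neighbors.append(new_server)
        return True
    if random.random() < FANOUT / cluster_size:
        neighbors[random.randrange(FANOUT)] = new_server
        return True
    return False

def pick_neighbors(here_replies):
    """New-server side: from the HERE replies, compute every server's
    in-degree and pick the FANOUT servers with the lowest in-degree,
    keeping the graph connected and improving those servers' redundancy."""
    indegree = Counter()
    for sender, their_neighbors in here_replies.items():
        indegree[sender] += 0            # ensure every responder appears
        for n in their_neighbors:
            indegree[n] += 1
    ranked = sorted(indegree, key=lambda s: indegree[s])
    return ranked[:FANOUT]
```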

\subsubsection{Status Queries}
\label{sec:status}
The purpose of the organization mesh is to help clients and servers find
servers that are part of some mini-mesh. A client can initially
connect to any server in the cluster, even if the server is not currently
serving the client's desired stream. When
this happens, a server will need to get additional information from the other servers to improve
resource utilization. This process is explained more in
section~\ref{sec:clustering}.

\subsection{Mini-meshes}
\label{sec:minimesh}
Mini-meshes are the multicast overlays that connect each server that wishes to
share a single content stream. This means that there is a one-to-one
correspondence between mini-meshes and content providers. Mini-meshes are created and expanded
much like the organization mesh, but they use a number of servers proportional
to the number of subscribers to the mesh. The servers in a mini-mesh are a
subset of the servers in the organization mesh, but the edges between them
could be different from those in the organization mesh. Each mini-mesh contains
a single publishing server where the content is generated. The mini-mesh has an
ID just like the organization mesh, which is the SHA-1 hash of the name of
the content stream (\eg{}, ``Princeton basketball game''). This ID is in
every PUB message the publisher generates. These PUB messages contain the
actual data of the stream, along with sequence numbers so that servers in the
mini-mesh forward each message they receive only once. On average each
server will receive every message $5$ times; it forwards and publishes the
first copy and discards the remaining $4$.
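This duplicate suppression can be sketched as follows, assuming hypothetical \texttt{publish} (deliver to local subscribers) and \texttt{forward} (relay to mini-mesh neighbors) callbacks:

```python
def make_forwarder(publish, forward):
    """Forward each (mesh_id, seq) PUB message at most once. With a
    fanout of 5, roughly 4 of every 5 arriving copies of a message
    hit the `seen` set and are discarded."""
    seen = set()

    def on_pub(mesh_id, seq, data):
        key = (mesh_id, seq)
        if key in seen:
            return False             # duplicate copy: discard
        seen.add(key)
        publish(data)                # deliver to subscribed clients
        forward(mesh_id, seq, data)  # relay to neighbors exactly once
        return True

    return on_pub
```

A production implementation would also need to bound the \texttt{seen} set (\eg{} by expiring old sequence numbers), which this sketch omits.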

Joining a mini-mesh works much like joining the organization mesh does. When a
server determines that it should join a mini-mesh (see next section), it sends
out a \reqmsg{} containing the mini-mesh ID to the organization mesh. The \reqmsg{} gets forwarded
through the organization mesh, but this time only servers that are members of
the mini-mesh respond with \heremsg{} messages. The new server then constructs a
new set of neighbors for that mini-mesh. Since each mini-mesh (and the
organization mesh) is identified by a unique ID, a server knows which set of
neighbors to forward any message to based on the mesh ID in the message, with
the default being the organization mesh.
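Concretely, this per-mesh routing state amounts to a table keyed by mesh ID. The class below is an illustrative sketch (its names are ours, not the prototype's):

```python
import hashlib

# Organization mesh ID: SHA-1 of the empty string, as in the prototype.
ORG_MESH_ID = hashlib.sha1(b"").hexdigest()

class MeshRouter:
    """Map mesh IDs to neighbor sets; an unknown mesh ID falls back
    to the organization mesh's neighbors, the default described above."""

    def __init__(self, org_neighbors):
        self.tables = {ORG_MESH_ID: org_neighbors}

    def join(self, stream_name, neighbors):
        # A mini-mesh ID is the SHA-1 hash of the content stream's name.
        mesh_id = hashlib.sha1(stream_name.encode()).hexdigest()
        self.tables[mesh_id] = neighbors
        return mesh_id

    def neighbors_for(self, mesh_id):
        return self.tables.get(mesh_id, self.tables[ORG_MESH_ID])
```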

A server joins a mini-mesh in one of two ways. The first is as the initial
publishing server. This happens when the mini-mesh is created, and the server that
becomes the publishing server is picked using the clustering technique described
in the next section. It does not need to inform the other servers that a
mini-mesh has been made since any server that wants to find out will query the
organization mesh and eventually reach this server. The other way a server joins
a mini-mesh is if it gets a client trying to subscribe to that content stream,
but after querying for load information during clustering, finds there is no
room for new clients on any of the current servers in the mini-mesh. It then
sends out a \reqmsg{} with the mini-mesh ID and joins the mini-mesh as described
for the organization mesh. Once it is part of the mini-mesh, it can accept
clients looking for that content stream since it will be receiving all published
data.

\subsection{Clustering}
\label{sec:clustering}
Since we allow clients to connect to any server initially, we need a way to keep
mini-meshes compact so as not to waste intra-cluster bandwidth, and reach our
goal of scaling with content demand. If we did not compact the mini-meshes,
eventually each one would grow to include every server in the cluster, wasting
intra-cluster bandwidth. Instead, we use a technique we call clustering to
(a) put new content streams on the least utilized servers and (b) put new
clients on servers already in the desired mini-mesh, or the least utilized
server if all the servers in the mini-mesh are full. For our prototype, we
measure load by assigning each server a constant number of client ``slots'';
the load measure is the number of free slots (a publisher counts as one slot).
More sophisticated ways of measuring load, such as outgoing bandwidth usage,
could also be used. Cherkasova and Staley \cite{UDC} discuss issues such as
streaming at different bitrates and streaming directly from memory instead of
from disk. Both of these issues apply to live streaming data. Using ideas such
as these could allow for more intelligent clustering of clients.

When a client or a publisher connects to a server, the server queries the
appropriate mesh for load information. If it's a publisher, the organization
mesh is queried with a SURVEY message which contains the best load seen so far.
If a server has a better load measure, it will respond directly to the server
that started the survey. The SURVEY message is forwarded on to a server's
neighbors, with an updated payload of the best load seen for this query. The
server that started the query takes the best responder after a timeout period
(in our prototype, $50$~msec) and informs the publisher which server it should
connect to, or accepts the publisher itself if it is the best one.
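The per-server step of the survey flood might look like the following sketch, where \texttt{reply} and \texttt{forward\_to\_neighbors} are hypothetical transport callbacks and load is measured in free slots (higher is better):

```python
def handle_survey(my_free_slots, best_so_far, reply, forward_to_neighbors):
    """One server's step in a SURVEY flood. If this server beats the
    best load seen so far, it replies directly to the survey's
    originator; either way it forwards the SURVEY onward with the
    updated best value in the payload."""
    if my_free_slots > best_so_far:
        reply(my_free_slots)          # we beat the best seen so far
        best_so_far = my_free_slots
    forward_to_neighbors(best_so_far)  # propagate the updated best load
    return best_so_far
```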

The process works similarly for subscribing clients, except that any server
already in the desired mini-mesh is preferred to one not in the mini-mesh even
if it has a worse load value. This is to keep mini-meshes as compact as possible
so that the mini-mesh's size is proportional to its popularity, server
utilization is maximized, and intra-cluster bandwidth is minimized. If no
suitable server in the mini-mesh can be found, the initial server redirects the
client to the least loaded server or accepts the client if it is the least
loaded server, after which it initiates the process of joining the mini-mesh.
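The selection rule for subscribers can be summarized in a short sketch. Here \texttt{candidates} maps each server (names are illustrative) to a pair of its free-slot count and whether it is already in the desired mini-mesh:

```python
def choose_server(candidates):
    """Place a subscriber: any server already in the mini-mesh with a
    free slot beats every server outside it, even at a worse load
    value; otherwise fall back to the least-loaded server overall
    (which will then join the mini-mesh)."""
    in_mesh = [(s, free) for s, (free, member) in candidates.items()
               if member and free > 0]
    if in_mesh:
        # Among in-mesh servers, still prefer the least loaded.
        return max(in_mesh, key=lambda pair: pair[1])[0]
    return max(candidates, key=lambda s: candidates[s][0])
```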

% Figure \ref{outGraph} shows an example layout of a \sysname{} with $5$ servers.
% For image readability we gave each node $3$ neighbors even though the default is
% $5$. The organization mesh is represented in black. Because there are more
% servers than there are connections per server, this is not a completely connected
% graph, but every node can be reached from any other in $2$ jumps. The triangles
% with $A$ and $B$ represent content producers. Each is connected to one server.
% The server they are connected to is part of the minimesh for that producer. The
% image shows this by coloring the server red. The number inside the server
% represents the minimesh it is part of and how many clients are connected to it.
% Each server in this example has $20$ slots. There are a total of $40$ clients
% subscribed to each producer. Because the producers take up a slot, this is too
% many clients to keep the servers completely separate. One server is a member of
% both minimeshes and has a subscriber from each minimesh attached to it.
% % 
%  \begin{figure*}[t]
%  \begin{center}
%      \includegraphics[width=.50\linewidth] {figs/oOut.png}
%      \vspace{-0.08in}
%      \caption{Example graph where each server has 3 neighbors}
%      \label{figure:outGraph}
%  \end{center}
%  \end{figure*}