\section{Related Work}
\label{sec:related}
Chainsaw~\cite{chainsaw} uses a randomly connected network graph, similar to
ours, as the backbone of its multicast overlay. However, unlike our system,
data flows from node to node in a \emph{pull-based} fashion: a node explicitly
requests the data it wants. While this reduces the number of redundant messages
a node receives, a push-based model seems a better fit for content that is
streamed continuously, and redundancy can instead be reduced through in-mesh
pruning. Each node in the mesh can tell a sender to slow or stop sending if
that sender's copies consistently arrive late. This prunes the mesh toward a
more tree-like shape while still allowing nodes to ``reactivate'' a sender if
that path later improves. Further, Chainsaw is designed as a peer-to-peer
distribution system, whereas \sysname{} would serve as the backend for a
centralized service that clients connect to.
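The lateness-based pruning described above can be sketched as follows. This is an illustrative model, not the paper's implementation: the names (\texttt{MeshPruner}, \texttt{LATE\_THRESHOLD}, \texttt{WINDOW}) and the specific lateness statistic are our assumptions. A receiver tracks, per upstream sender, how often that sender's copy of a chunk arrives after the chunk has already been seen from another sender, pauses senders that are consistently late, and reactivates them if their rate improves.

```python
# Hypothetical sketch of in-mesh pruning: pause senders whose copies
# of chunks consistently arrive after another sender's copy.
from collections import defaultdict

LATE_THRESHOLD = 0.8   # pause a sender whose copies are late this often
WINDOW = 50            # chunks of history kept per sender

class MeshPruner:
    def __init__(self):
        self.seen = set()                  # chunk ids already received
        self.history = defaultdict(list)   # sender -> [was this copy late?]
        self.paused = set()                # senders asked to stop sending

    def on_chunk(self, sender, chunk_id):
        late = chunk_id in self.seen       # another sender delivered it first
        self.seen.add(chunk_id)
        hist = self.history[sender]
        hist.append(late)
        if len(hist) > WINDOW:
            hist.pop(0)                    # keep a sliding window
        self._update(sender)

    def _update(self, sender):
        hist = self.history[sender]
        if len(hist) < WINDOW:
            return                         # not enough evidence yet
        late_rate = sum(hist) / len(hist)
        if late_rate >= LATE_THRESHOLD:
            self.paused.add(sender)        # prune: ask sender to stop
        elif sender in self.paused and late_rate < LATE_THRESHOLD / 2:
            self.paused.discard(sender)    # path improved: reactivate
```

Because paused senders can be reactivated, the mesh retains its redundancy as a fallback even while operating in a mostly tree-like shape.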

Bullet~\cite{bullet} is another peer-to-peer system that uses a mesh to
disseminate data. Its main focus is avoiding the bottlenecks that multicast
trees are susceptible to: it breaks up the data it wants to send and
distributes the pieces around the network, making it the receivers'
responsibility to find and gather the pieces they wish to receive. Bullet
makes a large leap in utilizing the full bandwidth available in a network;
however, live video streaming requires very low latency, and it is not clear
that Bullet can provide it. Similarly, SplitStream~\cite{splitstream} works by
striping the data it sends across multiple multicast trees.
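The striping idea common to Bullet and SplitStream can be illustrated with a minimal sketch. The function names (\texttt{stripe}, \texttt{merge}) and the round-robin assignment are our simplifying assumptions; the real systems use more sophisticated dissemination and recovery. The source assigns chunk $i$ to stripe $i \bmod k$, each stripe travels along a different path, and a receiver must gather all $k$ stripes to reconstruct the stream.

```python
# Illustrative round-robin striping: no single path carries the
# whole stream, so no interior node becomes a full-rate bottleneck.
def stripe(chunks, k):
    """Assign chunk i to stripe i mod k."""
    stripes = [[] for _ in range(k)]
    for i, chunk in enumerate(chunks):
        stripes[i % k].append(chunk)
    return stripes

def merge(stripes):
    """Reconstruct the original chunk order from all k stripes."""
    out = []
    longest = max(len(s) for s in stripes)
    for i in range(longest):
        for s in stripes:
            if i < len(s):
                out.append(s[i])
    return out
```

The sketch also makes the cost visible: a receiver that is missing even one stripe cannot reconstruct the stream, which is why locating all pieces quickly matters for latency.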

Another system that combines different overlays to build an efficient network
is mTreebone~\cite{mTreebone}. Wang et al. observed that most overlays depend
heavily on the reliability and speed of a backbone of nodes. mTreebone builds a
tree of stable nodes, called a \emph{treebone}, and surrounds it with a mesh,
giving the system the performance of a tree combined with the resilience of a
mesh. We hypothesize that, with certain types of pruning implemented, our
system might perform similarly to mTreebone.

An interesting problem with live TV is that viewers change channels
frequently. Cha et al.~\cite{ipNetCha} and Smith~\cite{smithIPTV} suggest that
a large amount of bandwidth may be consumed by users who are simply flipping
through channels looking for something to watch. A good frontend, like the one
ESPN3 uses, takes much of the guesswork out of choosing a channel, making this
less of a concern.


