\documentclass[12pt]{scrartcl}
\usepackage{jeffe,handout,graphicx,hyperref}
\usepackage[utf8]{inputenc}
\usepackage{microtype}
\usepackage{colortbl,arydshln} % color and dashed lines in arrays
\usepackage[charter]{mathdesign}
\usepackage[mathcal]{euscript}
\usepackage[all]{xy}
\usepackage[noend]{algorithmic,algorithm}
\usepackage[usenames,dvipsnames]{color}

\usepackage[T1]{fontenc}
\def\sfdefault{fve}
\def\ttdefault{fvm}

\providecommand{\OO}[1]{\operatorname{O}\left(#1\right)}
\providecommand{\OW}[1]{\Omega\left(#1\right)}
\providecommand{\OT}[1]{\Theta\left(#1\right)}
\providecommand{\good}[1]{\textbf{\color{Green}{#1}}}
\providecommand{\bad}[1]{\textbf{\color{Red}{#1}}}

\title{Horcrux}
\subtitle{A Content Persistence Manager Geared Toward Use by Providers in a BitTorrent-based Media Distribution Network}

\author{Daniel H. Larkin and Yonatan Naamad}

\begin{document}

\headers{COS 461}{Horcrux}{Larkin \& Naamad}

\maketitle

\section{Overview and Motivation}

Currently, media distribution is handled primarily in a traditional client-server manner.  This stems from a number of reasons, chief among them distribution restrictions related to digital rights management.  With the growth of alternative business models and freely distributed content, however, peer-to-peer software such as BitTorrent becomes more attractive.  BitTorrent can efficiently distribute large amounts of data among many resource-diverse users in a manner quite resilient to flash crowds and transient peers.  There is, unfortunately, still the question of persistent availability.

Users help the health of a distribution swarm as long as they remain active; however, most peers are somewhat selfish and will not seed a torrent indefinitely.  Unless the popularity of a torrent remains quite high over time, this leaves the provider with the burden of keeping content available.  Persistently supporting a large number of torrents with a large amount of data requires quite a bit of infrastructure, rivaling that of the traditional server distribution model.  A rather obvious solution is a media distribution network (in the image of content distribution networks for the web) which provides its services to content providers.  A large, shared pool of resources would of course be better able to balance the variable load across the entire distribution network.

We will now give a more precise formulation of the problem we wish to solve.

\subsection{Problem Formulation}

The content persistence manager (CPM) has a fixed pool of server resources with which to distribute media.  For the sake of clarity, suppose there are $N$ servers, where server $i$ has $b_i$ bandwidth slices, $d_i$ units of disk space, and the computational resources to actively run at most $a_i$ torrents at any given time.  The CPM is in charge of a media distribution network consisting of $T$ torrents, where torrent $i$ requires $s_i$ units of disk space.  The CPM guarantees to its business clients that each torrent $i$ will always have a minimum of $m_i$ bandwidth slices allocated to it, burstable up to $M_i$ when there are resources available.

Under the assumptions that $\sum m_i < \sum b_i \ll \sum M_i$ and $\forall i,\, d_i \ll \sum s_i < \sum d_i$ (that is, the minimum guarantees are satisfiable but bursts must be rationed, and no single server can hold all of the content though the pool collectively can), the task of the CPM is to distribute content as efficiently as possible while honoring the business agreements.

\section{Specification}

\subsection{Gauging Torrent Health}

In order to measure some sense of efficiency, we need a way to gauge torrent health.  The simplest approach is based on utilization, exploiting the idea that bottlenecked torrents would benefit from additional resources, while idle torrents can likely sacrifice resources with no ill effect.

Each client will poll the torrent application at fixed intervals, keeping track of the utilization level of the current bandwidth allocation for each torrent.  If a torrent has been above $x$ utilization for $y$ periods (e.g.\ $>90\%$ for 3 periods), it will request more resources.  Similarly, if a torrent is below $w$ utilization for $z$ periods (e.g.\ $<70\%$ for 5 periods), it will notify the control server that it has excess resources which can be reallocated.

This is a very simple client-side calculation, and the client only updates the server after sustained, significant change in swarm behavior.  This means the management overhead is relatively small; however, it may not react well to quick changes.

\subsection{CPM Operation \& Control Flow}

As vaguely hinted at above, the functionality of the CPM will be split between a central ``Server'' application and a ``Client'' application running on the actual workhorse servers.  The Client will interact with Deluge (a libtorrent-based client), gathering statistics and sending control messages via remote procedure calls.  This way we will need only our own lightweight applications and will not need to modify or build an actual BitTorrent client.

A loose analogy to keep in mind is that the Server is like a memory manager (in this case managing bandwidth slices).  Clients request more resources for torrents, which the Server allocates; in turn, Clients report freed slices back to the Server so that they may be used to fulfill future requests.

There will additionally be a central ``Database'' which keeps track of the actual .torrent files, as well as storing the guarantees made to business clients.  This will provide a method to bootstrap the Server and provide the Clients with a repository of necessary data.

\subsubsection{Database}

The Database acts as a central repository.  It supports two primary functions.  First, upon receiving a $\mathsc{List}$ command from the Server, it will send an $\mathsc{Exposit}$ message containing a list of torrent records.  Each record includes a torrent identifier, the size of the torrent data, and the minimum and maximum bandwidth allocations according to the business agreement.

Further, upon receiving a $\mathsc{Get}$ command from a Client, the Database will send a $\mathsc{Give}$ response with the requested .torrent file.

\subsubsection{Server}

The Server maintains two inter-linked dictionaries.  One holds torrent records while the other holds Client records.  Each torrent $i$ is associated with the Client records responsible for its distribution.  Each Client record is associated with current allocation details across all the torrents it is actively supporting.

The Server adds a Client to the pool when a $\mathsc{Join}$ message is received, then sends allocation requests to the Client to get it started in the seeding pool.  If the connection to a Client is dropped, it is removed from the pool.

When the Server receives a resource request from a Client, it immediately processes it.  To process the request, the Server must find the requested number of free slices from its pool, and send messages to the appropriate Clients to allocate the specified resources to the torrent.  If a Client already has resources allocated, then the Server must send the updated resource range rather than just a request for additional units.

When the Server receives a resource return ($\mathsc{Free}$) from a Client, it subtracts the specified amount from its record of the Client's allocation for the specified torrent and, as a consequence, adds that amount back to the Client's pool of free slices.

The Server may at some point deem it necessary to reclaim resources in order to satisfy requests.  It may send a special allocation message to a Client in order to stop the Client's activity on a torrent.  The criteria which must be met to begin reclaiming resources, and how to decide which resources to reclaim, are intentionally left unspecified in this document.  Similarly, the method by which to decide how to fulfill a resource request is also unspecified.

\subsubsection{Client}

When the Client begins execution it will send a join request to the Server.  At a minimum, this request will contain information about the total available bandwidth, disk space, and active torrent limit.  It may also contain information about current torrent allocations if the client is resuming from a previous session.  After that it will begin normal operation, waiting for allocation requests from the Server, monitoring torrents, and generating free and request messages as necessary.  Upon receiving an allocation request from the Server, it will store the received $(m,M)$ pair for the torrent and allocate $M$ slices to the specified torrent.  If the torrent is already running on the Client then it will allocate additional slices up to $M$ total.  If $m$ and $M$ are both $0$, the Client stops the torrent.  If the Client needs more disk space in order to start the new torrent, it will delete files from its stopped torrent cache.

The Client will poll Deluge at regular intervals.  If an already-downloaded torrent is using less than $70\%$ of its allocation for several consecutive periods, it will decrease the allocation by $20\%$ (observing the minimum $m$) and generate a free message to the Server.  If a torrent is using at least $90\%$ of its allocation for several consecutive periods, it will generate a request to the Server for an additional number of slices equal to $50\%$ of its current allocation.  The Client will then hold off any additional requests for several extra periods, to cope with the excess traffic generated by a new Client downloading the torrent.
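
This polling policy can be sketched as follows.  The class and field names are our own, and the sketch elides the hold-off after a request; the thresholds match those above.

```python
# Illustrative sketch of the Client's per-torrent polling policy.
# Names and structure are ours; only the thresholds come from the text.
HIGH_UTIL, HIGH_PERIODS = 0.90, 3   # sustained high utilization -> request more
LOW_UTIL, LOW_PERIODS = 0.70, 5     # sustained low utilization -> free slices

class TorrentMonitor:
    def __init__(self, allocation, minimum):
        self.allocation = allocation      # current bandwidth slices
        self.minimum = minimum            # guaranteed minimum m
        self.high = 0                     # consecutive high-utilization periods
        self.low = 0                      # consecutive low-utilization periods

    def poll(self, used):
        """Process one period; return ('request', n), ('free', n), or None."""
        util = used / self.allocation
        self.high = self.high + 1 if util >= HIGH_UTIL else 0
        self.low = self.low + 1 if util < LOW_UTIL else 0
        if self.high >= HIGH_PERIODS:
            self.high = 0
            return ('request', self.allocation // 2)       # ask for +50%
        if self.low >= LOW_PERIODS:
            self.low = 0
            freed = min(self.allocation // 5,              # shrink by 20%...
                        self.allocation - self.minimum)    # ...but honor m
            if freed > 0:
                self.allocation -= freed
                return ('free', freed)
        return None
```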

\subsection{Overlay Topology}

In our limited testing setting, the overlay network will simply consist of a single central Server and many Clients in a star configuration.  In a much larger network, it may ease strain to build a tree hierarchy, with intermediate nodes handling subsets of the Clients as best possible and aggregating results before reporting back to the root Server.  This would likely require \textit{many} Clients before becoming necessary though, so it will not be implemented.

\subsection{Communication Protocols}

All communication is to be carried over TCP to ensure reliability and minimize implementation overhead.  Every message begins with an 8-bit opcode.  Torrent records will be used in multiple communications.  Each record is of the form 
\begin{description}
    \item[$\mathsc{Record}(T,d,m,M)$]: 
	\begin{itemize}
	    \item $T$ is a 160-bit torrent identifier
	    \item $d$ is a 32-bit size for the torrent's data in MB
	    \item $m$, $M$ are 32-bit minimum and maximum bandwidth allocations respectively.
	\end{itemize}
\end{description}
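
A record therefore occupies $20 + 4 + 4 + 4 = 32$ bytes on the wire.  A minimal sketch of packing and unpacking one, assuming this field order and network byte order:

```python
import struct

# Sketch of the 32-byte Record wire format: a 160-bit torrent identifier
# followed by three 32-bit integers (d, m, M), all in network byte order.
RECORD = struct.Struct('!20sIII')

def pack_record(torrent_id: bytes, size_mb: int, m: int, M: int) -> bytes:
    return RECORD.pack(torrent_id, size_mb, m, M)

def unpack_record(buf: bytes):
    return RECORD.unpack(buf)
```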

\subsubsection{Client/Server Communications}

\begin{description}
    \item[$\mathsc{Join}(k,b,d,a_{max},a_{current},L)$]: 
	\begin{itemize}
	    \item opcode 0
	    \item 128-bit key $k$
	    \item 32-bit total available bandwidth $b$ in KB/s
	    \item 32-bit disk space $d$ in MB
	    \item 16-bit active torrent limit $a_{max}$
	    \item 16-bit current torrent count $a_{current}$
	    \item (variable length) list $L$ of $a_{current}$ active torrent records
	\end{itemize}

    \item[$\mathsc{Allocate}(m,M,T)$]:
	\begin{itemize}
	    \item opcode 1
	    \item 32-bit minimum allocation $m$
	    \item 32-bit maximum allocation $M$
	    \item 160-bit torrent identifier $T$
	\end{itemize}

    \item[$\mathsc{Request}(b,T)$]:
	\begin{itemize}
	    \item opcode 2
	    \item 32-bit allocation request $b$
	    \item 160-bit torrent identifier $T$
	\end{itemize}
    
    \item[$\mathsc{Free}(b,T)$]:
	\begin{itemize}
	    \item opcode 3
	    \item 32-bit free count $b$
	    \item 160-bit torrent identifier $T$
	\end{itemize}
\end{description}
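
As an example, the fixed-size $\mathsc{Allocate}$ message is 29 bytes: a one-byte opcode, two 32-bit integers, and the 160-bit identifier.  A sketch of its encoder and decoder (byte order and field order as above):

```python
import struct

# Illustrative encoder/decoder for the Allocate message (opcode 1):
# 8-bit opcode, 32-bit m, 32-bit M, 160-bit torrent identifier.
ALLOCATE = struct.Struct('!BII20s')

def pack_allocate(m: int, M: int, torrent_id: bytes) -> bytes:
    return ALLOCATE.pack(1, m, M, torrent_id)

def unpack_allocate(buf: bytes):
    opcode, m, M, tid = ALLOCATE.unpack(buf)
    assert opcode == 1          # reject misrouted messages
    return m, M, tid
```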

\subsubsection{Database Communications}

\begin{description}
    \item[$\mathsc{List}(k)$]:
	\begin{itemize}
	    \item opcode 4
	    \item 128-bit key $k$
	\end{itemize}

    \item[$\mathsc{Exposit}(n, L)$]:
	\begin{itemize}
	    \item opcode 5
	    \item 32-bit torrent count $n$
	    \item (variable length) list $L$ of $n$ torrent records
	\end{itemize}
    
    \item[$\mathsc{Get}(T)$]:
	\begin{itemize}
	    \item opcode 6
	    \item 160-bit torrent identifier $T$
	\end{itemize}

    \item[$\mathsc{Give}(l,F)$]:
	\begin{itemize}
	    \item opcode 7
	    \item 32-bit file length $l$ in bytes
	    \item torrent file $F$
	\end{itemize}
\end{description}
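
Because $\mathsc{Exposit}$ carries a variable-length list, the receiver must read the count and then step through $n$ fixed-size records.  A sketch, assuming the 32-byte record layout above:

```python
import struct

# Sketch of parsing an Exposit reply (opcode 5): a 32-bit record count
# followed by n 32-byte torrent records, all in network byte order.
RECORD = struct.Struct('!20sIII')

def unpack_exposit(buf: bytes):
    opcode, n = struct.unpack_from('!BI', buf, 0)
    assert opcode == 5
    records, offset = [], 5          # records start after opcode + count
    for _ in range(n):
        records.append(RECORD.unpack_from(buf, offset))
        offset += RECORD.size
    return records
```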

\section{Design Decisions}

Because the specification above leaves several points open, a number of major design decisions were made during the implementation process.

\subsection{Database}

The database was implemented in Python using an SQLite backend. Each .torrent file is associated with one or more users, and each user is associated with one or more keys. The database accepts just one request per connection, after which its reply is sent and the connection is closed. This allows the database to serve many such projects simultaneously without holding open too many concurrent connections. While the database does not directly accept remote commands to add new torrents, the included script \texttt{addtorrent.py} presents a simple means of doing so when the database has a single user. Adding additional users or torrents currently requires modifying the SQLite database directly using SQL queries.
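
The schema implied above can be sketched as follows; the table and column names here are illustrative, not the ones in our actual code.

```python
import sqlite3

# Minimal sketch of the implied schema: torrents belong to users,
# and each user may hold several access keys. Names are illustrative.
conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE keys     (key BLOB, user_id INTEGER REFERENCES users(id));
    CREATE TABLE torrents (info_hash BLOB, size_mb INTEGER,
                           min_bw INTEGER, max_bw INTEGER,
                           user_id INTEGER REFERENCES users(id));
''')
conn.execute("INSERT INTO users (name) VALUES ('provider')")
conn.execute("INSERT INTO torrents VALUES (?, 700, 10, 100, 1)",
             (b'\x00' * 20,))
row = conn.execute("SELECT size_mb, min_bw, max_bw FROM torrents").fetchone()
```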

\subsection{Server}

The Server was implemented in C.  The dictionaries were implemented as splay trees.  Each torrent record had a linked list of associated Clients, and each Client had a linked list of torrent allocations.  Any priority queues used were implemented as pairing heaps.

\subsubsection{Allocations}

Allocations are first made to Clients which are already servicing the requested torrent.  If these Clients do not have resources to satisfy the request fully, allocations are made to Clients drawn from a priority queue.  The queue is keyed by fractional bandwidth available, so that underutilized Clients are chosen first.
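
The priority-queue step can be sketched as below; the Server itself is written in C with pairing heaps, so this Python version with \texttt{heapq} is only an illustration of the ordering.

```python
import heapq

# Illustration of the allocation order: Clients are drawn from a heap
# keyed by fraction of bandwidth still free, most-idle Client first.
def allocation_order(clients):
    """clients: iterable of (name, used_slices, total_slices) tuples."""
    # Negate the free fraction so Python's min-heap pops the largest first.
    heap = [(-(total - used) / total, name) for name, used, total in clients]
    heapq.heapify(heap)
    while heap:
        _, name = heapq.heappop(heap)
        yield name
```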

\subsubsection{Resource Reclamation}

When the Server sees that global bandwidth utilization is greater than $90\%$, it attempts to free resources until global levels are under $85\%$.  In an overloaded system this is likely to trigger more requests and oscillation; in a healthy system, however, it is entirely possible that many small, lingering allocations are gumming up the works and can safely be cleared out.  When it is time to reclaim resources, the Server proceeds in two phases.  In each phase, the Server takes torrents one at a time from a priority queue keyed by oldest average allocation timestamp, checking after each torrent whether enough resources have been freed.  During the first pass, it completely stops Client activity for allocations which are small enough not to violate the minimums.  During the second pass, it shrinks allocations.
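
The two-pass logic looks roughly like this.  The representation is simplified: each allocation carries the number of slices it must keep to honor the torrent's guarantee (\texttt{'min'}, zero if it is expendable) and an age stamp; all field names are our own.

```python
# Sketch of the two-pass reclamation described above, oldest allocations
# first. 'min' is the slice count this allocation must keep (0 = expendable).
def reclaim(allocations, needed):
    freed = 0
    by_age = sorted(allocations, key=lambda a: a['stamp'])
    # Pass 1: completely stop expendable allocations.
    for a in by_age:
        if freed >= needed:
            return freed
        if a['min'] == 0:
            freed += a['slices']
            a['slices'] = 0
    # Pass 2: shrink the remaining allocations down toward their minimums.
    for a in by_age:
        if freed >= needed:
            break
        cut = min(a['slices'] - a['min'], needed - freed)
        if cut > 0:
            a['slices'] -= cut
            freed += cut
    return freed
```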

\subsection{Client}

The client was implemented in Python and uses the Deluge API to interact with torrents. Parameters describing available resources, storage location, server/database connection details, and so on are passed in the \texttt{params.conf} file. 

\subsubsection{The Client-Deluge interface}
Although Deluge identifies torrents simply by their \texttt{info\_hash}, the Client wraps them in a class of its own to help maintain additional metadata about each torrent. The translation between the two representations is handled in its own module, which presents a simple set of functions to the rest of the program. This module also simplifies the programming process by converting many of Deluge's asynchronous calls into synchronous ones, allowing a more linear style of programming with little loss in efficiency.

\subsubsection{Lazy deletion}
Files on the client are deleted only when more storage space is required. This allows a client to rejoin a swarm it has previously left with all files intact, instantly providing an additional seed and avoiding the temporary congestion a fresh download would cause. Files are deleted in an LRU fashion until enough storage is available to house the new file.
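
The eviction bookkeeping amounts to the following sketch (our own names; the real Client also deletes the files on disk):

```python
from collections import OrderedDict

# Sketch of lazy LRU deletion: files of stopped torrents stay on disk
# and are evicted oldest-first only when a new torrent needs the space.
class StoppedCache:
    def __init__(self, free_mb):
        self.free = free_mb                   # space not occupied by any torrent
        self.stopped = OrderedDict()          # torrent id -> size_mb, oldest first

    def stop(self, tid, size_mb):
        self.stopped[tid] = size_mb           # file remains on disk for now

    def make_room(self, size_mb):
        """Evict least-recently-stopped files until size_mb fits."""
        while self.free < size_mb and self.stopped:
            _, freed = self.stopped.popitem(last=False)
            self.free += freed                # real client would delete files here
        return self.free >= size_mb
```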

\subsubsection{Status saving}
To help a client restore its place in the network after an unexpected disconnection, the current status of the client is saved after every major change. When reconnecting to the server, clients supply their loaded state as part of the join request and immediately resume their work until the server presents new instructions. This allows for a much more stable handling of temporary disconnections.
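
Saving after every major change means a crash can interrupt a write, so the save should be atomic.  A sketch of the pattern, with an assumed file name and state shape:

```python
import json
import os
import tempfile

# Sketch of atomic status saving: write to a temporary file, then rename
# over the old state so a crash never leaves a torn file. File name and
# state contents are assumptions for illustration.
def save_state(state, path='client_state.json'):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as f:
        json.dump(state, f)
    os.replace(tmp, path)          # atomic rename on POSIX

def load_state(path='client_state.json'):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}                  # fresh client: nothing to resume
```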

\section{Testing}

Unfortunately, we did not have time to test the application to the extent that we would have liked.  We have observed basic functionality in a small, closed system ($\leq 4$ Clients, at most a handful of torrents, and one controlled peer) and are confident that there are only a few bugs left in the system; however, we have not yet been able to test on a large scale.  The codebase ended up being significantly larger than we had expected, so by the time we were done coding we had only a few days to debug and another couple to actually test functionality.  Had time allowed, we would have liked to test a few dozen Clients with a few hundred torrents, but we were not able to set up such a significant testbed.  We have chosen various constants in our implementation which would certainly benefit from experimental tuning.  We also ran into some issues on the campus network with our peer not connecting to the Clients; we are unsure whether this is related to OIT policy, the tracker we used, or some other culprit.

That said, we do have some good news.  The Database handles queries admirably.  The Server makes allocations safely and smoothly (though reclamation has not been thoroughly tested).  The client interacts with Deluge gracefully and generates requests and frees at appropriate times.

\section{Conclusion}

In conclusion, we have made a nice little piece of software which, with further testing, tuning, and development, could actually be used to power a media distribution network.  Unfortunately, we are not very hopeful for any immediate progress on the project, due to other academic concerns.

\end{document}
