\documentclass[12pt]{scrartcl}
\usepackage{jeffe,handout,graphicx,hyperref}
\usepackage[utf8]{inputenc}
\usepackage{microtype}
\usepackage{colortbl,arydshln} % color and dashed lines in arrays
\usepackage[charter]{mathdesign}
\usepackage[mathcal]{euscript}
\usepackage[all]{xy}
\usepackage{algorithm}
\usepackage[noend]{algorithmic} % noend is an algorithmic option, not an algorithm one
\usepackage[usenames,dvipsnames]{color}

\usepackage[T1]{fontenc}
\def\sfdefault{fve}
\def\ttdefault{fvm}

\providecommand{\OO}[1]{\operatorname{O}\left(#1\right)}
\providecommand{\OW}[1]{\Omega\left(#1\right)}
\providecommand{\OT}[1]{\Theta\left(#1\right)}
\providecommand{\good}[1]{\textbf{\color{Green}{#1}}}
\providecommand{\bad}[1]{\textbf{\color{Red}{#1}}}

\title{Horcrux Asides}
\subtitle{Design Justification, How We Got Here, and What Comes Next}
\date{\vspace{-.35in}}
\author{Daniel H. Larkin and Yonatan Naamad}

\begin{document}
\maketitle
\section*{Component Topology}
As in all multiagent systems, choosing the underlying topology was one of the first major decisions in this project. Although the symmetry of a peer-to-peer design is appealing, the difficulty of coordinating a decentralized network toward changing goals led us to a client-server system instead. That left a choice between star-like and (nondegenerate) tree-like hierarchies. Primarily for simplicity, we chose the star-like model: a single agent knows at all times what is going on, allowing accurate gauging of the status of the network when deciding upon allocations. A tree-like hierarchy might be required if the number of clients grows large (say, more than a few hundred), but as long as the number of clients stays small, the star appears to be the right choice.
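The star-like hierarchy can be sketched as a single coordinator that receives status reports directly from every client, so allocation logic can consult one complete, current view of the network. This is an illustrative sketch only; the class and field names below are assumptions, not part of the actual system.

```python
class Coordinator:
    """Hub of the star: every client reports its status directly here."""

    def __init__(self):
        self.status = {}  # client_id -> latest reported status dict

    def report(self, client_id, status):
        # Clients push status straight to the hub, so the hub always
        # holds a complete snapshot of the network.
        self.status[client_id] = status

    def snapshot(self):
        # Allocation decisions can gauge the whole network in one place.
        return dict(self.status)


hub = Coordinator()
hub.report("client-1", {"swarm": "a", "utilization": 0.2})
hub.report("client-2", {"swarm": "b", "utilization": 0.9})
print(len(hub.snapshot()))  # 2 clients known to the hub
```

The appeal of the star over a tree is visible even in this toy: no aggregation or forwarding logic is needed before the coordinator can act on global state.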

\section*{Utilization Model vs. Swarm Health Model}
The goal of this project is to build clients that assist swarms as much as possible. This relies on establishing a \textit{swarm health metric} to identify the swarms that require the most assistance, and then distributing clients accordingly. There are three main barriers to this approach:

\begin{enumerate}
	\item Most such metrics we found apply only to certain use cases (e.g. \cite{PDKA06}, \cite{SRV11}). It is difficult to anticipate which use cases our system will encounter in practice, so the best approach is likely to start with a generic system and let users rewrite the rules to fit their needs.
	\item The metrics that do apply generally require information that is available only to the tracker and that clients can learn only indirectly (e.g. \cite{MASMTV10}). This added level of indirection made taking advantage of such metrics infeasible for a six-week project.
	\item Testing behavior under complicated metrics is itself complicated: it is much easier to force a client into edge-case behavior when the edge cases are simple.
\end{enumerate}
Thus, we decided to proceed by using utilization as a proxy for swarm health. In theory, a client's utilization should be inversely related to the health of its swarm: an extremely healthy swarm requires very little of each of its constituents, while a swarm in poor shape needs all of the help it can get from every peer. The simplicity of this model also made testing feasible, helping us both locate various bugs and identify parameters and behavior that should be tweaked.
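Under this model, reinforcing the neediest swarm reduces to comparing mean client utilization across swarms. A minimal sketch, assuming utilization is reported as a fraction in $[0,1]$ and that the function and swarm names are purely illustrative:

```python
def neediest_swarm(utilization_by_swarm):
    """Return the swarm whose clients report the highest mean utilization.

    High mean utilization serves as a proxy for poor swarm health: the
    swarm is demanding the most from each of its peers.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    return max(utilization_by_swarm, key=lambda s: mean(utilization_by_swarm[s]))


reports = {
    "popular-iso": [0.10, 0.20, 0.15],  # mostly idle clients: healthy swarm
    "rare-dataset": [0.90, 0.95],       # saturated clients: struggling swarm
}
print(neediest_swarm(reports))  # rare-dataset
```

The simplicity that made this model easy to test shows here as well: forcing the edge case "one swarm clearly neediest" takes two lines of fixture data.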

\section*{Additional Thoughts / What's Next}
As hinted above, two possible directions include transitioning to a tree-like hierarchy and adopting a more intricate measure of swarm health. It might also be worthwhile to investigate different ways of transitioning clients into a swarm; perhaps there are better ways of introducing new seeds for a torrent than first bringing in another leecher. One possible accommodation, which also fixes the issue of torrent availability (the case where the only client with a certain torrent goes down), would be a central repository from which only clients can download directly. If torrents are sufficiently stable, the main cost of upholding this repository is storage, which is rarely a major concern at modern hard drive prices.

Another, much more intricate, idea involves incorporating a learning algorithm into the server. If the server can identify basic usage patterns tied to the time of day (such as those that arise from cultural differences across time zones, among others), it could prepare clients to handle the change in workload ahead of time. This would allow a much more seamless transition for swarms with time-dependent behavior, and would help bootstrap these swarms into much healthier states than the current memoryless process can.
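One simple instance of such a learner, offered only as a sketch of the idea rather than a proposed design, keeps a running per-hour mean of observed demand and uses the historical mean for an upcoming hour to pre-position clients before the load actually shifts. All names and the demand scale below are assumptions.

```python
from collections import defaultdict


class HourlyDemandModel:
    """Running mean of observed demand per hour-of-day bucket."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, hour, demand):
        # Fold each observation into its hour-of-day bucket (0-23).
        self.totals[hour % 24] += demand
        self.counts[hour % 24] += 1

    def predict(self, hour):
        h = hour % 24
        if self.counts[h] == 0:
            return 0.0  # no history yet: fall back to memoryless behavior
        return self.totals[h] / self.counts[h]


model = HourlyDemandModel()
for day in range(3):
    model.observe(9, 0.5)    # mornings are busy in this swarm
    model.observe(21, 0.25)  # evenings are quiet
print(model.predict(9))   # 0.5
print(model.predict(21))  # 0.25
```

With such a model, the server could consult `predict(next_hour)` when allocating clients, rather than reacting only after utilization has already spiked.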

\bibliographystyle{acm}
\bibliography{refs}
\end{document}
