\section{System Description}
\label{sec:system_description}
In this section we explore the theoretical aspects of our strategies and their respective implementations. First we give an overview of the BitTorrent protocol and the client we have chosen to modify in section \ref{sec:bittorrent}. In section \ref{sec:sliding} we then explain the \emph{Sliding Window} strategy and the changes it entails to the existing BitTorrent protocol, by first examining the theoretical change and then describing the modification of \emph{Jbittorrent} that supports it. In section \ref{sec:segments} we do the same for the \emph{Segments} strategy.

Finally, in section \ref{sec:alternative}, we briefly discuss alternative strategies and why we did not choose them.
\begin{comment}
A thorough description of your system---with special focus on the P2P aspects.  How does your system solve its intended problem; how does it do what it does at run-time?  If there is a difference between your design and your implementation, you should be very specific in describing those differences.
\end{comment}

\subsection{BitTorrent} \label{sec:bittorrent}
In this section we first describe the BitTorrent protocol, and then give a description of the client we have chosen to modify to be able to test our strategies.

\subsubsection{BitTorrent protocol} \label{sec:bittorrent_protocol}
To better understand the strategies proposed in this report, you need to understand the BitTorrent protocol as it is used in many different applications. The normal use of BitTorrent was briefly introduced in section \ref{sec:related_work}, and in this section we elaborate on that. \cite{Cohen:2003wd} gives a good overview of the BitTorrent technical framework and describes the protocol clients must adhere to in order to participate in the distribution of data in the best way. The protocol is optimized for sharing data quickly and efficiently, but it is poorly suited for other types of systems.

The main selling point of BitTorrent is that instead of sending an entire file over one connection, the file is split into chunks or pieces, making it possible to distribute the download across a large number of peers instead of relying solely on a single server or peer.
\\

The BitTorrent protocol is divided into three major parts. First there is the tracker that, as its name suggests, keeps track of who is currently downloading or seeding the file you want to get. When you want to download a piece of data using BitTorrent, your client connects to the tracker, or trackers, responsible for the given file. The tracker then returns a number of randomly selected peers, normally 50, and the client starts downloading from these peers using the second major part of the protocol, namely the piece selection strategy. As touched upon in section \ref{sec:related_work}, \cite{shah2007peer}, among others, have suggested changing the distribution of peers returned by the tracker, but we will not delve further into this possibility in this report.

The piece selection strategy is the most important part for this report. In the BitTorrent protocol, a peer will try to download the piece that is least replicated among its list of peers. This strategy helps ensure high availability and lessens the problem of peers leaving the network and thereby removing the opportunity to get pieces that only they had. There are two exceptions to this strategy. The first is when a peer joins the network for the first time and starts downloading: it will then select a random piece to get started, and afterwards proceed with rarest first. The second exception is the so-called end game, where peers request all the remaining pieces they need from all their peers.
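As a rough illustration of rarest first, the following sketch picks, among the pieces we still need, the one advertised by the fewest peers. The class and method names are our own and not taken from any client:

```java
import java.util.*;

// Illustrative rarest-first piece selection (not actual client code).
// For each needed piece, count how many known peers advertise it, and
// return the needed piece held by the fewest peers.
public class RarestFirst {
    public static int pickRarest(Set<Integer> needed, List<Set<Integer>> peerBitfields) {
        int best = -1;
        int bestCount = Integer.MAX_VALUE;
        for (int piece : needed) {
            int count = 0;
            for (Set<Integer> bitfield : peerBitfields) {
                if (bitfield.contains(piece)) count++;
            }
            // Only consider pieces that at least one peer can provide.
            if (count > 0 && count < bestCount) {
                bestCount = count;
                best = piece;
            }
        }
        return best; // -1 if no needed piece is currently available
    }
}
```

A real client would also apply the two exceptions above, choosing randomly for the first piece and switching to end-game mode near completion.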

The final major part of the protocol is the choking algorithm. To encourage peers to share as much as possible, BitTorrent employs a strategy called choking, where all but a fixed number of peers are restricted from downloading from the client. As an extension of this strategy, the client will also periodically perform optimistic unchokes to see whether other peers can provide a better download rate than the ones currently unchoked. As a rule, you always try to unchoke the peers that provide the highest download rate.
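As an illustration of the rate-based part of this rule, a client might recompute its unchoke set as below. The names and the slot count are our own illustrative choices, not prescribed by the protocol:

```java
import java.util.*;

// Illustrative rate-based unchoking: keep the top-N fastest peers
// unchoked and choke everyone else. A real client adds one extra
// optimistically unchoked peer, rotated periodically.
public class Choker {
    // peerRates maps a peer id to its measured download rate (bytes/s).
    public static Set<String> unchoke(Map<String, Double> peerRates, int slots) {
        List<Map.Entry<String, Double>> peers = new ArrayList<>(peerRates.entrySet());
        peers.sort((a, b) -> Double.compare(b.getValue(), a.getValue())); // fastest first
        Set<String> unchoked = new LinkedHashSet<>();
        for (int i = 0; i < Math.min(slots, peers.size()); i++) {
            unchoked.add(peers.get(i).getKey());
        }
        return unchoked;
    }
}
```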
\\

In this report we will only look at changes to the piece selection strategy. For research on the possibilities opened up by changing the tracker or the choking algorithm, we refer the reader to the literature on those subjects.

\subsubsection{Jbittorrent} \label{sec:jbittorrent}
To test our strategies we needed to implement them in a client. Instead of implementing the entire BitTorrent protocol ourselves, we chose an existing client and modified it. The client we chose is called Jbittorrent. It is implemented in Java and supports most of the functionality we needed to test our strategies. The Jbittorrent implementation includes a class that allows you to set up your own tracker, which makes it ideal for testing. The test setup is explained in section \ref{sec:test_environments}.
\\

We want to make clear that we only use Jbittorrent because it makes it easy to set up a small test environment. It lacks some of the core parts of the BitTorrent protocol. Jbittorrent does not use the rarest first piece selection strategy by default, so we had to implement that ourselves. Furthermore, when a BitTorrent client decides on a piece to download, it usually splits that piece into smaller chunks, typically 16 kB each, and gets all those small chunks before it starts downloading another piece. This makes good use of all the available peers, since the client can request a chunk from each peer and thereby speed up the download.

Jbittorrent does things a little differently. Instead of splitting each piece up, it tries to download the entire piece from every peer willing to share it, and when one of these downloads finishes it discards the partially downloaded copies from all the other peers and starts on a new piece.

One could argue that choosing Jbittorrent makes for an unreliable test, but it suffices for testing the different strategies in this report. Even though it does not fully implement the BitTorrent protocol, it only does things in a slower way; therefore, if the strategies work in Jbittorrent, they should work even better in a client that supports the full BitTorrent protocol.

\subsection{Strategies} \label{sec:strategies}

\subsubsection{Sliding window} \label{sec:sliding}
The first strategy we decided to implement and test is presented in \cite{shah2007peer} and \cite{vlavianos2007bitos}. In this strategy we have what is called either a high priority set or a sliding window. In this report we will discuss it using the terminology of \cite{vlavianos2007bitos}.

The strategy is based on the idea that we want to be sure we always have the pieces we are about to watch. In theory, we maintain three different sets: a set of received pieces, the high priority set, and a set with all the remaining pieces, as seen in Figure \ref{fig:sliding}. When you start downloading, the set of received pieces is empty and the high priority set consists of the first $k$ sequential pieces of the movie we want to watch. We then use the rarest first selection strategy on the pieces in the high priority set to determine which piece to get. When that piece has been requested and downloaded, we move it to the set of received pieces and move the first sequential piece from the remaining set into the high priority set. This gives the effect of a sliding window that slides along as we get more pieces.
\begin{figure}
\centering
  \includegraphics[width=0.5\textwidth]{SlidingWindow}\\
  \caption{Sliding window visualized}\label{fig:sliding}
\end{figure}

An optimization discussed in the literature is to randomize whether you pick a new piece from the high priority set or from the remaining set. This could be optimized further by changing the probabilities based on how far ahead the download is, that is, how many of the pieces next in line to be watched have already been downloaded. In this report we have only looked at the case where we always pick a piece from the high priority set, and we leave the evaluation of randomization as interesting future work.
\\

In the implementation we have a high priority set of a given size. Whenever we need to choose a new piece to download, we check whether the high priority set is full, and if it has free space we fill it. We then choose which piece in the set to download using rarest first, and when a piece has been downloaded it is removed from the set.
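A minimal sketch of this bookkeeping is shown below. The class and method names are our own illustrative choices; the actual modification to Jbittorrent differs in detail:

```java
import java.util.*;

// Illustrative sliding-window bookkeeping: a high priority set of fixed
// size is topped up with the next sequential pieces; completed pieces
// leave the set, making the window slide forward.
public class SlidingWindow {
    private final int windowSize;
    private final Deque<Integer> remaining = new ArrayDeque<>(); // pieces in play order
    private final Set<Integer> window = new LinkedHashSet<>();   // high priority set

    public SlidingWindow(int totalPieces, int windowSize) {
        this.windowSize = windowSize;
        for (int i = 0; i < totalPieces; i++) remaining.add(i);
    }

    // Top up the window with the next sequential pieces; the caller then
    // applies rarest first over the returned set to pick a piece.
    public Set<Integer> highPrioritySet() {
        while (window.size() < windowSize && !remaining.isEmpty()) {
            window.add(remaining.poll());
        }
        return Collections.unmodifiableSet(window);
    }

    // Called when a piece has finished downloading.
    public void pieceReceived(int piece) {
        window.remove(piece);
    }
}
```

With a window size of $k$, the set returned by \texttt{highPrioritySet} always holds the next $k$ unreceived pieces in playback order, matching the description above.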

\subsubsection{Segments} \label{sec:segments}
The second of our tested strategies originates from \cite{annapureddy2007high}. Annapureddy et al. describe and test a number of different strategies, ranging from naive piece picking strategies, such as a purely sequential strategy, to a more advanced one. This advanced strategy is the one we have chosen to implement.

The approach proposed in this strategy is to split a file into a number of segments, where each segment holds an equal number of pieces. When you start downloading, the strategy chooses the first two sequential segments and starts downloading from them, using a biased coin to decide which segment to request pieces from. In the proposed strategy the biased coin chooses the first segment with a probability of 90 percent and the second segment 10 percent of the time. This form of strategy is called pre-fetching, and should perform consistently well according to \cite{annapureddy2007high}. When a segment has been chosen, a piece can be selected from that segment using any strategy; in our case we have chosen rarest first to be consistent with the other strategies. A visualization of this strategy can be seen in Figure \ref{fig:segments}.
\begin{figure}
\centering
  \includegraphics[width=0.5\textwidth]{Segmenter}\\
  \caption{Segments strategy visualized}\label{fig:segments}
\end{figure}

When the first segment has been fully downloaded, the segment window slides one step, so the second segment now has a 90 percent chance of being chosen and the third segment 10 percent. This continues until all the segments have been downloaded.
\\

The intuition for using pre-fetching lies in the benefits to the system as a whole. It is easy to argue that a single peer would benefit more from only choosing pieces from the next immediate segment it needs to stream, and this is true in some cases. The fact remains, though, that when you pre-fetch you help distribute the load, because you can deliver the needed pieces to peers that are watching a part of the file further along than yourself. In essence, pre-fetching should make segment transitions smoother for the entire system, at the cost of only a few peers getting slightly lower performance.
\\

Our implementation uses two sets to hold the two segments. We make a biased random choice about which set, or segment, to download a piece from, and when the first segment is empty we copy all the pieces from the second segment into the first and fill the second segment with the next segment-size pieces.
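This bookkeeping can be sketched as follows. The class names, the use of a seeded \texttt{Random}, and the head-of-queue pick inside a segment (standing in for rarest first) are all our own illustrative choices, not Jbittorrent code:

```java
import java.util.*;

// Illustrative two-segment pre-fetching picker: a biased coin chooses
// the current segment 90% of the time and the next segment 10% of the
// time; when the current segment empties, the window slides one step.
public class Segments {
    private final Random rng;
    private final Deque<Integer> rest = new ArrayDeque<>(); // pieces not yet in a segment
    private Deque<Integer> current = new ArrayDeque<>();
    private Deque<Integer> next = new ArrayDeque<>();
    private final int segmentSize;

    public Segments(int totalPieces, int segmentSize, long seed) {
        this.segmentSize = segmentSize;
        this.rng = new Random(seed);
        for (int i = 0; i < totalPieces; i++) rest.add(i);
        fill(current);
        fill(next);
    }

    private void fill(Deque<Integer> segment) {
        while (segment.size() < segmentSize && !rest.isEmpty()) segment.add(rest.poll());
    }

    // Pick a segment with the biased coin, then take a piece from it.
    // (Here we take the head of the queue; rarest first would choose
    // among the pieces within the segment instead.)
    public int nextPiece() {
        if (current.isEmpty()) { // slide the segment window one step
            Deque<Integer> tmp = current;
            current = next;
            next = tmp;
            fill(next);
        }
        boolean pickCurrent = next.isEmpty() || rng.nextDouble() < 0.9;
        return pickCurrent ? current.poll() : next.poll();
    }
}
```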

\subsubsection{Alternative strategies} \label{sec:alternative}
The two strategies described above were chosen from a long list of possible strategies, and here we give some motivation for choosing them instead of the other strategies proposed in the literature.
\\

One other type of strategy we considered can be exemplified by a concrete implementation, BASS \cite{dana2006bass}. In this kind of strategy you always get the first $k$ pieces from the server, and while you are downloading them you download the rest from other peers using strategies such as rarest first. One thing that makes this strategy efficient is that you should be able to ensure that you always have the next piece you want to watch. A visualization can be seen in Figure \ref{fig:server}.

\begin{figure}
 \centering
  \includegraphics[width=0.5\textwidth]{ServerSequentiel}\\
  \caption{Server based strategy visualized}\label{fig:server}
\end{figure}
Our major goal in this report is to explore the possibility of using less server capacity and bandwidth and thereby saving money. One could argue that a solution like BASS would give us what we are looking for, but that overlooks the bigger picture. While it is true that you would not need as much bandwidth if the people who wanted to stream the data were spread out over a long period of time, it is also the case that under flash crowds you would still need the same amount of server capacity and bandwidth to be able to service everybody.

We chose to steer clear of strategies that use the server in any direct way, and instead focus on strategies that utilize the server as a seeder or supernode, thereby decreasing the needed amount of server capacity and bandwidth.
