\section{Evaluation}
\label{sec:evaluation}
In the following section the system is evaluated, first in a controlled environment on a local computer, and then against a real tracker by downloading a movie.

\subsection{Test environment} \label{sec:test_environment}
The tests are performed on a single laptop hosting a simple tracker and four peers. One of these peers is a so-called super peer, meaning that it is the only peer holding the movie at the start; it is the equivalent of the server in a conventional VoD system. The three other peers start up at one-second intervals and request the video as if they were watching it from the beginning. The tests thereby simulate a small flash crowd requesting the same video.

To make the tests more realistic, a small delay is inserted before each message is sent or received, simulating realistic upload and download speeds. The sending delay is always larger than the receiving delay, modelling an asymmetric internet connection. The peers' simulated bandwidths differ from one another but remain constant throughout all the tests.
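The delay mechanism can be sketched as follows. This is a minimal illustration in Java; the class name, method names, and delay values are hypothetical and are not those of the actual implementation:

```java
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch: throttles a peer's traffic by sleeping before each
 * send/receive. The upload delay is required to exceed the download delay,
 * modelling an asymmetric connection. Delay values are per-peer and fixed.
 */
class SimulatedLink {
    private final long uploadDelayMs;   // larger: upstream is the bottleneck
    private final long downloadDelayMs; // smaller: downstream is faster

    SimulatedLink(long uploadDelayMs, long downloadDelayMs) {
        if (uploadDelayMs <= downloadDelayMs)
            throw new IllegalArgumentException("upload delay must exceed download delay");
        this.uploadDelayMs = uploadDelayMs;
        this.downloadDelayMs = downloadDelayMs;
    }

    /** Called before every outgoing message. */
    void beforeSend() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(uploadDelayMs);
    }

    /** Called before every incoming message is processed. */
    void beforeReceive() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(downloadDelayMs);
    }
}
```

Since each peer gets its own fixed pair of delays, the same bandwidth profile holds across all test runs, making the runs comparable.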

\subsection{Use of strategies in test environment} \label{sec:test_environment_strategies}
In the following section the sliding window and segment strategies are investigated, with the rarest first algorithm as a reference point. The two strategies are first evaluated on throughput, measured as the number of chunks they receive within the first 60 seconds. The window size of the sliding window algorithm and the segment size of the segment algorithm are then varied until an upper limit is found for each; these limits are used in the later evaluation.

The average download time of each strategy is also calculated, again as a measure of throughput.

As a measure of the delay (goodput) before a peer can begin viewing the movie, the average time until the first 20 chunks are received is calculated.
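This metric can be computed as sketched below. The snippet is illustrative and uses hypothetical names; each peer's chunk arrival times are assumed to be logged in seconds since start-up:

```java
import java.util.List;

/**
 * Hypothetical sketch of the goodput metric: given each peer's per-chunk
 * arrival times, compute the average time until the first N chunks have
 * all arrived, averaged across the peers.
 */
class LatencyMetric {
    static double averageTimeToFirstChunks(List<double[]> arrivalTimesPerPeer, int firstN) {
        double sum = 0.0;
        for (double[] arrivals : arrivalTimesPerPeer) {
            double latest = 0.0;
            // A peer is "ready" when the last of its first N chunks arrives.
            for (int i = 0; i < firstN; i++)
                latest = Math.max(latest, arrivals[i]);
            sum += latest;
        }
        return sum / arrivalTimesPerPeer.size();
    }
}
```

Taking the maximum over the first N arrival times, rather than the time of the N-th arrival overall, reflects that playback cannot start until every one of the leading chunks is present.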

Lastly, the message overhead is investigated and discussed.

\subsubsection{Sliding window strategy} \label{sec:test_sliding}
The sliding window strategy works as described in section \ref{sec:sliding}. Figure \ref{fig:sliding_60} shows how the sliding window algorithm reacts to different window sizes within the first 60 seconds of running. A window size of 10 is clearly optimal, as it yields the highest throughput. With a window size of five, throughput is lower because the three peers will, with high probability, request the same chunks. With a window size greater than 10, throughput also drops, since the rarest first algorithm then has to analyze a larger collection of chunks.
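The chunk selection this strategy performs can be sketched as follows, assuming per-chunk availability counts are known from the neighbours' bitfields. All names are hypothetical and the snippet is not the actual implementation:

```java
import java.util.BitSet;

/**
 * Hypothetical sketch of the sliding window strategy: rarest first is
 * applied only within a window of chunks just ahead of the playback
 * position, which keeps the download roughly sequential.
 */
class SlidingWindowPicker {
    /**
     * @param have         chunks this peer already holds
     * @param availability availability[i] = number of neighbours holding chunk i
     * @param windowStart  first missing chunk in playback order
     * @param windowSize   e.g. 10, the size the tests found best for throughput
     * @return index of the rarest obtainable chunk in the window, or -1 if none
     */
    static int nextChunk(BitSet have, int[] availability, int windowStart, int windowSize) {
        int best = -1;
        int bestCount = Integer.MAX_VALUE;
        int end = Math.min(windowStart + windowSize, availability.length);
        for (int i = windowStart; i < end; i++) {
            // Only consider missing chunks that some neighbour can provide.
            if (!have.get(i) && availability[i] > 0 && availability[i] < bestCount) {
                best = i;
                bestCount = availability[i];
            }
        }
        return best;
    }
}
```

The window size is the trade-off parameter: a small window degenerates towards a sequential strategy, while a large window hands rarest first more chunks to analyze.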

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/sliding_60.jpg}
\end{center}
\caption{Sliding window results within the first 60 seconds for different window sizes.}
\label{fig:sliding_60}
\end{figure}

Another important parameter besides throughput is latency (goodput): the time a peer has to wait before it can start watching the video. This is evaluated as the average time the three peers take to download the first 20 chunks, see Figure \ref{fig:sliding_latency}. Even though a sliding window size of 10 gives the highest throughput, it also gives the highest latency. With a window size of 5 a peer can start watching the video long before a peer with a larger window, because rarest first has fewer chunks to choose from, which increases the probability of picking the chunk needed next. A window size of 5 is, however, also closer to a pure sequential strategy, which results in very low throughput as described in \cite{annapureddy2007high}. It is harder to explain why the latency decreases slowly as the window size grows from 10 to 30; this may be because the results are averaged over only three peers.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/sliding_latency.jpg}
\end{center}
\caption{Sliding window time to download the first 20 chunks for different window sizes.}
\label{fig:sliding_latency}
\end{figure}

There is, however, no doubt that the algorithm works: rarest first takes 1108 seconds to obtain the first 20 chunks, compared to the sliding window's worst case of 78 seconds. While this is not a comparison of total download time, it shows that the sliding window produces a far more sequential download, and a sequential download is what allows the movie to be watched while it is downloading.

The total download time is also highly influenced by the window size, as seen in Figure \ref{fig:sliding_totaltime}. Not surprisingly, a window size of 10 finishes the file first, since it runs with the highest throughput.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/sliding_totaltime.jpg}
\end{center}
\caption{Sliding window time to download the entire file for different window sizes.}
\label{fig:sliding_totaltime}
\end{figure}

Another important evaluation criterion for the algorithm is overhead, i.e. the number of messages sent through the network. The message counts below are for a window size of 10, the size that delivers the highest throughput:

\begin{itemize}
 \item Peer 3: 14705 messages
 \item Peer 2: 13652 messages
 \item Peer 1: 27969 messages
 \item Super peer: 53912 messages
\end{itemize}

It is clearly still the super peer that contributes most to the network, but the other peers contribute as well, which would matter in a situation with a flash crowd of far more than three peers \cite{vlavianos2007bitos}. This supports the relevance of P2P technology in VoD systems, since flash crowds do occur in real-world scenarios. \cite{yu2006understanding} explores the tendency for flash crowds and the expected number of them during a movie's uptime. Their results show that a movie will usually attract flash crowds during its first few days online, after which the number of viewers steadily decreases. Since P2P is probably somewhat less effective than a purely server-based system when only one person is streaming the video, one could explore the possibility of using P2P only during the initial period when flash crowds usually occur. This might be a mistake, though, since technologies like Twitter and Facebook make it easy for people to share links to old videos, so flash crowds can spring up again. This discussion, and the tests it would require, is left as future work.

Notice that peer 1, which most likely finished first in this case, has contributed more than peers 2 and 3, which finished later.

\subsubsection{Segment strategy} \label{sec:test_segment}
The segment strategy works as described in section \ref{sec:segments}. Figure \ref{fig:segment_60} shows how the segment algorithm reacts to different segment sizes within the first 60 seconds of running. A segment size of 20 is clearly optimal, as it yields the highest throughput. As with the sliding window, throughput decreases for sizes smaller or larger than 20, most likely for the same reasons.
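The selection the segment strategy performs can be sketched as follows, assuming per-chunk availability counts are known: rarest first is applied inside the earliest incomplete segment before moving on. Names are hypothetical and the snippet is not the actual implementation:

```java
import java.util.BitSet;

/**
 * Hypothetical sketch of the segment strategy: the file is divided into
 * fixed-size segments, and rarest first is applied within the earliest
 * segment that is not yet complete.
 */
class SegmentPicker {
    static int nextChunk(BitSet have, int[] availability, int segmentSize) {
        int total = availability.length;
        for (int segStart = 0; segStart < total; segStart += segmentSize) {
            int segEnd = Math.min(segStart + segmentSize, total);
            int best = -1, bestCount = Integer.MAX_VALUE;
            boolean complete = true;
            for (int i = segStart; i < segEnd; i++) {
                if (have.get(i)) continue;
                complete = false;
                // Rarest obtainable chunk within this segment.
                if (availability[i] > 0 && availability[i] < bestCount) {
                    best = i;
                    bestCount = availability[i];
                }
            }
            if (!complete) return best; // stay in this segment until it is done
        }
        return -1; // all segments complete
    }
}
```

Because a segment must be finished before the next one is started, the request order is more strictly sequential than with a sliding window, which is consistent with the segment strategy's stronger dependence on throughput observed below.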

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/segment_60.jpg}
\end{center}
\caption{Segment results within the first 60 seconds for different segment sizes.}
\label{fig:segment_60}
\end{figure}

The segment algorithm has one of its lowest latencies with a segment size of 20, as seen in Figure \ref{fig:segment_latency}; this is also the size giving the highest throughput, so the algorithm differs from the sliding window at this point. With a segment size above 20 the algorithm slows down so much that latency increases drastically. One might say that the segment algorithm is highly dependent on a high throughput.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/segment_latency.jpg}
\end{center}
\caption{Segment time to download the first 20 chunks for different segment sizes.}
\label{fig:segment_latency}
\end{figure}

Like the sliding window algorithm, the segment algorithm drastically lowers the latency compared to rarest first.

The total download time is also highly influenced by the segment size, as seen in Figure \ref{fig:segment_totaltime}. Not surprisingly, a segment size of 20 finishes the file first, since it runs with the highest throughput.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{images/segment_totaltime.jpg}
\end{center}
\caption{Segment time to download the entire file for different segment sizes.}
\label{fig:segment_totaltime}
\end{figure}

The message counts below are for a segment size of 20, the size that delivers the highest throughput:

\begin{itemize}
 \item Peer 3: 23309 messages
 \item Peer 2: 23834 messages
 \item Peer 1: 25908 messages
 \item Super peer: 70589 messages
\end{itemize}

As with the sliding window, it is still the super peer that contributes the most.

\subsubsection{Comparison} \label{sec:test_comparison}
The analysis reveals one major difference between the two algorithms: the segment algorithm is clearly more dependent on throughput than the sliding window algorithm. This is likely because it requests pieces more sequentially, which makes it more sensitive to a drop in throughput.

Furthermore, the sliding window algorithm appears somewhat better at spreading the load across the network than the segment algorithm, which depends heavily on the super peer. The segment algorithm also generates more messages than the sliding window algorithm. Rarest first, however, generates far more messages than either, presumably because it needs many messages to determine which chunks are the rarest. The message counts for the rarest first algorithm are:

\begin{itemize}
 \item Peer 3: 145240 messages
 \item Peer 2: 159134 messages
 \item Peer 1: 128452 messages
 \item Super peer: 419732 messages
\end{itemize}

Overall, the sliding window algorithm achieves a higher throughput, while the segment algorithm lets a peer start viewing the video with a lower latency. The relevant question is whether the segment algorithm's lower throughput will cause involuntary pauses in the video later on.

\subsection{Use of strategies with real trackers} \label{sec:test_real_trackers}
Since the basis of this report is the use of BitTorrent for VoD streaming, only ad-hoc tests have been made of how well the strategies work in an environment with normal trackers and peers using other strategies.

These tests were comprised of us trying to download a regular movie from an established tracker and see if we could start watching the movie as though we were streaming it. A problem doing this stems from the implementation of Jbittorrent and doesn't really have anything to do with the strategies. Jbittorrent only connects to one tracker with HTTP so if the first tracker on the list of trackers in the .torrent file doesn't work or requires you to use UDP it can't connect. This and the other problems described in section \ref{sec:jbittorrent} makes for a rather slow download rate so as the implementation is now you have a hard time watching it without pauses, but the point is that the strategies work and you are able to watch the movie before it is done. With other words, if the strategies got implemented in a client that was properly implemented and lived up to the BitTorrent protocol you would be able to stream movies from regular torrents.  