\section{Evaluation}
I present here the evaluation of my implementation. All measurements
were made on a MacBook Pro running OS X 10.5.8 with a 2.16 GHz Intel
Core 2 Duo and 3 GB of DDR2 SDRAM, and I have logged both speed and
communication. All test runs use a circuit consisting of $n$ input,
constant and output gates, with the number of multiplication gates
varying. Each figure is split in two: the left part shows the total
number of bytes sent, and the right part shows the speed of the
program. To correct for the program running on a single machine, I
divide the measured time by the number of players, since that
approximates the time a single player would spend (excluding the delay
from sending packets over SSL, as would occur in a real-life
scenario). The evaluation is unfortunately greatly limited by my
implementation's ability to function at high values of $n$ and with
large circuits; there is therefore a trade-off between the number of
players and the size of the circuit. As of now, the implementation can
handle up to $n=15$ with a small circuit, or $n=5$ with $75$
multiplication gates.

\begin{myremark}
  Note that I did not implement broadcasting. With true broadcast the
  communication overhead would grow as $n^2$, as opposed to the
  current $n$, since I just send values instead of broadcasting them.
\end{myremark}
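The message-count difference described in the remark can be sketched
with a small count. This is a simplified model of my own, not code
from the implementation; it only illustrates why sending instead of
broadcasting lowers the overhead by a factor of $n$:

```python
# Sketch (simplified model, not the actual implementation): when one
# player distributes a value, point-to-point sending costs n - 1
# messages, while a naive broadcast in which every recipient echoes
# the value to all n players costs on the order of n^2 messages.
def messages_sent(n):
    return n - 1            # one message to each other player

def messages_broadcast(n):
    return (n - 1) * n      # each of the n - 1 messages echoed to all n players

print(messages_sent(10), messages_broadcast(10))  # 9 90
```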

I chose to vary the following variables: 
\begin{itemize}
\item $n$ -- The number of starting players.
\item $c_M$ -- The number of multiplication gates.
\end{itemize}

I could also have varied $t$ in the hope that this would enable the
program to handle more than 15 players, but this turned out not to be
the case, so I saw no point in doing so, as it would prove nothing new.

\subsection{Varying $n$ for a small circuit}\label{eval1}
For the small circuit\footnote{See section \ref{circuits} for
  explicit information.} we would expect the measurements to fit a
function $f(x)=ax^3+bx^2+cx+d$, since the dominating term in theory is
$O(n^3\kappa)$.

\figtwo{bytes_small.jpg}{bytes_small}{Total bytes sent for the small
  circuit}{speed_small.jpg}{speed_small}{Speed of the program for the
  small circuit}{\capname{small circuit}{Tests for the small
    circuit.}}

The two figures \reff{bytes_small} and \reff{speed_small} were made
using the small circuit with just 2 multiplication gates. As seen in
figure \reff{bytes_small}, the line does not fit any clear
function. This can partly be explained by the fact that some values of
$n$ cause player elimination, possibly even multiple times, while
others do not. The values that do not require player elimination are
$n\in\{5,6,9\}$, which explains the jump from $n=6$ to $n=7$. Another
point that looks a bit off is $n=10$, but there we perform player
elimination twice, so once we have gotten rid of 4 players the
protocol uses only the same amount of communication as for $n=6$. The
same pattern shows up for the time spent in figure \reff{speed_small},
which goes to show that player elimination works as intended.

The theory states that we ought to communicate $O(n^3\kappa)$ bits
when excluding the cost of the gates, but this bound stems from
broadcasting, which I did not implement. Thus the cost is only
$t'n'\kappa=O(n^2\kappa)$, since I just send a value where it should
have been broadcast. This explains why the curve looks almost linear
and is only slightly inclined towards a polynomial. Unfortunately I do
not have enough results for larger $n$ to see the polynomial
increase. Also, since $\kappa$ is the bit-length of a field element,
it increases with $n$ because the field size increases; the internal
Java representations of the integers I send around might therefore
differ for larger $n$. One could take $\kappa=\log_2(n)$, which makes
the total cost $O(n^2\log_2(n))$. Either way, the protocol seems
highly efficient at small $n$, which is probably the most common
setting in a real-world application. As for the speed of the protocol,
it would be even faster if the generation of triples could happen in
parallel instead of sequentially; as of now it looks somewhat linear,
which is a good sign, but nothing conclusive can be said from the low
values I had to use.
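The growth of $\kappa$ with $n$ can be made concrete. The sketch below
assumes the field merely needs more than $n$ distinct evaluation
points for the sharing to work, which is a simplification of the
actual field choice in the implementation:

```python
import math

# Sketch (assumption: the field only needs size > n so that every
# player gets a distinct evaluation point; the implementation's actual
# field choice may be larger). The bit-length kappa of one field
# element then grows roughly like log2(n).
def kappa(n):
    return math.ceil(math.log2(n + 1))  # bits per field element

for n in (5, 10, 15):
    print(n, kappa(n))  # 5 3 / 10 4 / 15 4
```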

\subsection{Checking the communication linearity}
The theory's main result states that the communication cost should be
linear in $n$ per multiplication gate in the circuit ($O(nc_M)$),
which differs from previous papers that were only polynomial in
$n$\footnote{Other papers such as \cite{mpc2} also have this result,
  but none could boast of achieving it with perfect security until
  now.}. I show here how my implementation fares by keeping $c_M$
fixed at a high number (as high as my implementation allows, anyway)
and then varying $n$. The highest value of $c_M$ for which the program
can still run with $n=12$ is $c_M=25$; any higher, and the maximum
value of $n$ decreases. The only reason this experiment makes sense is
the result of section \ref{eval1}, which showed that my implementation
is almost linear in $n$ since the overhead is actually $t'n'\kappa$
and not $O(n^3\kappa)$. This means that even at values of $c_M$ as low
as $25$, the gate term dominates the communication cost.

\figtwo{pbytes_cm_25.jpg}{pbytes_cm_25}{Bytes sent during the
  preparation phase}{pspeed_cm_25.jpg}{pspeed_cm_25}{Time spent during
  the preparation phase}{\capname{preparation phase}{Tests for the
    preparation phase with $c_M=25$.}}

The above figures show the bytes sent and time spent during the
preparation phase. The theory states that the number of bytes sent
should be linear in the number of players. Based on the results it is
hard to say anything conclusive, but a few points affect the
outcome. The theory says that the communication should be
$O((c_M+c_I)n\kappa + n^3\kappa)$, so unless $c_M\gg n$ the
communication is influenced more by the overhead term $n^3\kappa$ than
by the number of multiplication gates. This can be seen in the case of
$c_M=25$ and $n=12$, where we get $c_Mn=300$ against $n^3=1728$, a
difference that only grows if we include $\kappa$. Thus, to make a
fair evaluation of the theory, we really need $c_M\gg n$. However, as
previously mentioned, the overhead of this implementation is actually
only $t'n'\kappa$, which for $n=12$ amounts to $t'n'\kappa=3\cdot
12\kappa=36\kappa$. The gate term therefore dominates, and since it is
linear in $n$, this shows on the graph as we can see.
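The numbers above can be checked directly. This is just the arithmetic
from the text with the common $\kappa$ factor dropped, not a model of
the protocol itself:

```python
# Arithmetic from the text, with the common kappa factor dropped.
def gate_term(c_m, n):
    return c_m * n            # the O(c_M * n * kappa) gate term

def broadcast_overhead(n):
    return n ** 3             # the O(n^3 * kappa) overhead with true broadcast

def reduced_overhead(t_prime, n_prime):
    return t_prime * n_prime  # the t' * n' * kappa overhead without broadcast

print(gate_term(25, 12))       # 300
print(broadcast_overhead(12))  # 1728: dwarfs the gate term with broadcast
print(reduced_overhead(3, 12)) # 36: the gate term dominates without it
```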

We can even explain the jumps in the graph: at $n=4$ we perform player
elimination, which causes us to send significantly more than if the
protocols were robust. They are robust for $n=5$, which explains the
decrease there. At $n=6$ we also run with robust protocols, but at
$n=7,8$ we again do player elimination. At $n=9$ we can again do
things robustly, and from that point on we do player eliminations. It
is not all bad for the communication to eliminate players, as we will
see in the next figure:

\figtwo{cbytes_cm_25.jpg}{cbytes_cm_25}{Bytes sent during the
  computation phase}{cspeed_cm_25.jpg}{cspeed_cm_25}{Time spent during
  the computation phase}{\capname{computation phase}{Tests for the
    computation phase with $c_M=25$.}}

We now look at the computation phase in detail. The theory states that
the computation phase should spend
$O((c_In+c_Mn+c_On+D_Mn^2)\kappa +c_IBA(\kappa))$ bytes on
communication. The results are very inconclusive, mainly because of
the small $c_M$. Had I the option of going to a higher $n$ and raising
the value of $c_M$, I reckon it would be possible to conclude
something. It would appear that the secret to the results lies in the
small details and the coefficients of the terms in the
$O$-notation. The jumps in communication and the decrease in speed for
higher $n$ can be explained in the same way as for the preparation
phase. For $n\in\{5,6,9\}$ we do not perform player elimination, and
we thus communicate with the full number of players in the computation
phase. For the rest, we communicate with at least $2$ players fewer,
which obviously means less communication and less time. For $n=10$ we
eliminate twice, making the cost the same as for $n=6$, in theory as
well as in this experiment. The jump from $n=5$ to $n=6$ in the speed
graph, though, is a mystery, as the time should not decrease; this
might indicate that dividing the time spent by the number of players
is an incorrect assumption.
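The effect of elimination on the number of communicating players can
be summarized in one line, assuming (as in the protocol's analysis)
that each elimination round removes one honest and one corrupt player:

```python
# Each player-elimination round removes two players (one honest, one
# corrupt), so after k rounds only n - 2k players still communicate.
def effective_players(n, eliminations):
    return n - 2 * eliminations

print(effective_players(10, 2))  # 6: n=10 with two eliminations matches n=6
print(effective_players(7, 1))   # 5
```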

\figtwo{tbytes_cm_25.jpg}{tbytes_cm_25}{Bytes sent in the combined
  phases}{tspeed_cm_25.jpg}{tspeed_cm_25}{Time spent in the combined
  phases}{\capname{combined phases}{Tests for the combined phases with
    $c_M=25$.}}

The total communication cost and time spent look fairly linear, but
again I suspect this is not a fair measurement of the linearity of the
protocol at such low values. Still, it is a good indication that the
theory might hold, and the apparent linearity in the speed of the
protocol is also a positive result, as it shows that, at least for
small numbers, the protocol runs in linear time. It even looks as if
the speed increases very little with every $n$, but one has to keep in
mind that delay from a real network would have to be added.

\subsection{Showing linearity in $c_M$}
To show the linearity in $c_M$ of the implementation, I here show a
graph measuring the bytes sent and the speed for increasing $c_M$ with
$n=5$. It is not a new result that this should hold, but it is
important that the implementation can handle an increase in the number
of multiplication gates in linear time. It can be viewed as a sanity
check that must pass in order for all other results to hold.

\figtwo{bytes_mult_vary_5.jpg}{bytes_mult_vary_5}{Total bytes sent for
  circuit with varying
  $c_M$}{speed_mult_vary_5.jpg}{speed_mult_vary_5}{Speed of the
  program for circuit with varying $c_M$}{\capname{vary_5}{Tests for
    varying $c_M$ with $n=5$.}}

We can here see fairly good evidence of linearity in $c_M$. The
communication increases linearly as it ought to, even with a
coefficient of determination of $R^2=1$, which could not be
better. This is not proof, as a real circuit would have far more
gates, but it is a good indication. As a side note, the speed seems to
follow a linear pattern as well, which bodes well for circuits
containing many more gates.
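The quoted $R^2$ value comes from a standard least-squares fit. As a
sanity check, the computation can be reproduced with the textbook
formulas; the data below is made up for illustration and is not my
actual measurements:

```python
# Least-squares line y = a*x + b and its coefficient of determination,
# computed from the standard formulas.
def linear_r2(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

cms = [5, 25, 50, 75]                        # made-up gate counts
bytes_sent = [1000 * c + 2000 for c in cms]  # perfectly linear toy data
print(linear_r2(cms, bytes_sent))            # 1.0 for an exact linear relation
```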
