\section{Implementation}
In this section I intend to give some general information about the
structure of the code, explain how I handle corruption, and highlight
interesting details about the code. I will walk through the protocols
once again, but this time discuss only implementation issues. Finally,
I will describe the frameworks used and the largest problems and bugs
I encountered during the implementation phase.

\paragraph{Synchronization:} Since \cite{mpc1} indirectly states that
all players should send something when they are supposed to, I chose
to make the global assumption that players send messages when they are
supposed to. This need not be the case, of course, and without this
global assumption an ``easy'' fix would be to declare a player
corrupted if he does not answer within a given time frame. However,
this requires synchronized timers, which I did not implement, as it is
a hard problem in itself to make sure that each player has
synchronized his timers with all others. Instead, I chose to have a
central component synchronize the players, making sure everyone is
ready for the next step in the protocol. I cannot use this central
component to ensure that everyone sends when he should, both because
that would still require timeouts and because it would be cheating,
since no central component really ought to exist. Thus it is only used
to synchronize the players. Synchronization is important because of
race conditions: if one player starts a sub-protocol before the others
are ready, there is a high risk of protocol failure. One example is
the messages logged for player elimination. In my program, every
message a player receives is logged, so if the next step begins before
all players were done sending in the previous step, an error will
occur later.

This central component also means that my program cannot be run on
more than one machine. A fix would be to implement synchronization by
having everyone send \textit{ready} when they are ready for the next
step. When everyone has answered, one could go on safely without
worrying about race conditions. This would avoid synchronized timers,
but would still not ensure that corrupted players answer when they
should; that would still have to be assumed, since ensuring that
everyone either answers or is considered corrupt requires
timeouts. Thus this fix would only get rid of the central
component. It would also increase the amount of communication, as such
a ready protocol requires $O(n^2)$ bits.
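
A minimal sketch of such a decentralized ready step (class and method
names are my own illustration, not the actual implementation): each
player counts incoming \textit{ready} messages and may proceed only
once all $n$ have arrived.

```java
// Hypothetical sketch of the decentralized "ready" step described above:
// each player broadcasts READY and waits until READY has arrived from
// all n players, costing O(n^2) messages per synchronization point.
public class ReadyBarrier {
    private final int n;
    private int readyCount = 0;

    public ReadyBarrier(int n) { this.n = n; }

    // called once for every READY message received (including our own)
    public synchronized void onReady() { readyCount++; }

    // true once READY has been received from all n players
    public synchronized boolean allReady() { return readyCount >= n; }
}
```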

\paragraph{Broadcasting:} Another important point is that I
intentionally left out implementing broadcasting and consensus
(Byzantine Agreement). It would take too long to create a sub-protocol
that simulates broadcasting, and it is not really needed for
demonstrating the theory of \cite{mpc1}. As a side note, not
implementing it actually helped demonstrate the theory, but this is
covered later. As mentioned in section \ref{broadcasting},
broadcasting and consensus can be simulated by communicating $O(n^2)$
bits. Instead, anywhere the protocol states that a player broadcasts a
value, I merely send the value to all players, with the restriction
that both honest and corrupt players send the same value to
everyone. This amounts to $O(n\kappa)$ bits instead.

\paragraph{Corruption:} I handle corruption of a player via a flag
which is set when the players are initialized. If a player has an ID
larger than $n-t$, he is initialized as corrupt. This means that
whenever he is supposed to send a value in \texttt{ReconsPriv}, he
sends the correct value plus one. The same happens in
\texttt{ReconsPubl} when sending back the reconstructed values. I
intentionally do not send corrupted values in the
\texttt{DoubleShareRandom} protocol, since this can never be robust
and is thus guaranteed to make an honest player unhappy if he received
even a single corrupted value. I find it more interesting to check
whether the robust methods work when using Berlekamp-Welch. I also did
not include a probability for the corrupted players to actually send
corrupted values, since I would like a deterministic run that I can
repeat. Other than those two locations in the code, a corrupted player
acts just like an honest player. This means it is not a perfect
adversary, since such an adversary would send unhappy during the
\texttt{FaultDetection} protocol to force player elimination and thus
slow down the computation the most. However, not doing that means that
there are still corrupted parties left to try to disrupt the
computation phase. This is needed in my implementation, as it cannot
handle $t=0$. One could also argue that this complicated system would
not be needed for $t=0$, as there would be no attempts at
cheating. This is the reasoning behind my design choices.

\subsection{System description}
I do not intend to walk the reader through the code step by step, but
rather to give an overview in the form of a UML diagram and then dig
deeper into some of the interesting aspects of the code:

\fig{0.3}{uml.png}{uml}{UML diagram of the code structure}

\begin{itemize}
\item \textit{Controller:} The main class of the project is the
  Controller. It starts everything up and initiates the preparation
  and computation phases. I need this central component in order to
  synchronize the players and keep track of when they are done with
  the different steps and ready to move on to the next.

  A few important commands can also be given via the terminal: one can
  exit the program or check how many bytes were sent in total, which
  also includes the users and eliminated players.

\item \textit{SharingProtocols:} is responsible for several protocols. It holds
  Lagrange interpolation, the sharing protocol as well as double
  sharing in their basic form without checks, reconstruction towards a
  single player, part of the reconstruction towards the public, and
  Berlekamp-Welch.

\item \textit{DoubleShareRandom} and \textit{GenerateTriples:} take
  care of the protocols of the same names. GenerateTriples calls
  DoubleShareRandom thrice, lets the Player handle the reconstructions
  and then does the final computations. DoubleShareRandom performs the
  two sharings, applies the matrix $M$, and does the check.

\item \textit{HIM:} is short for Hyper-Invertible Matrix and stores
  one. It handles the construction of the HIM and the linear
  combination resulting from applying a vector to the matrix.

\item \textit{Player:} The Player class is the focus of the project. The player
  controls the actual protocol, utilizing the helper classes HIM,
  DoubleShareRandom, GenerateTriples and SharingProtocols as
  needed. It has a server attached so one can send messages to it. The
  Player is responsible for part of the execution of the protocol
  \texttt{ReconsPubl}, the preparation phase after the triples have
  been generated, as well as the computations needed to evaluate a
  multiplication gate.

\item \textit{PlayerServer:} The server is responsible for starting up servants
  to handle incoming requests. This is done to make sure the server
  is ready to handle multiple incoming requests.

\item \textit{PlayerServerServant:} Each incoming connection is
  pushed to a PlayerServerServant taken from a pool of threads, which
  then processes the message. It is responsible for interpreting the
  incoming codeword that tells what part of the protocol the message
  belongs to, and uses the codeword to call the correct method on
  its player.

\item \textit{MessageList:} The MessageList class keeps track of the
  messages received as well as the random choices made during the
  generation of triples. It is serializable and contains lists of
  messages received from the different players, along with the random
  choices made. It also holds a convenient method for getting the next
  message from a certain player while internally keeping track of how
  far it has gotten.

\item \textit{User:} The User is a variant of the player, in the sense that the
  player could do the work of the user, but only if all players were
  honest. The users are there solely because we want $n$ input and
  output gates; they are therefore responsible for doing the work on
  the input and output gates.

\item \textit{CircuitController:} Once the preparation phase is done,
  we have to handle the computation phase using a circuit. This is the
  responsibility of the CircuitController. It holds the circuit to be
  evaluated, and all players are connected to it. I need this to be a
  central component for the same reasons as with the Controller. Each
  player has his own individual local circuit, though, which consists
  of gates of the 5 different types. It is important to note that
  players who are eliminated in the preparation phase have no way of
  updating their circuit. With the users there, however, we can still
  update that user's input and output gates.

\item \textit{Circuit:} The Circuit class has the responsibility of
  maintaining the circuit. This includes adding gates as needed, as
  well as removing them when they are no longer used. That is the idea,
  anyway, but since I did not get to construct a proper framework for
  large circuits, I currently have no method for checking whether a
  given gate is pointed at by another gate and should therefore not be
  removed. Thus, a call for removal of a gate does not delete it from
  the circuit. The reason deletion would be a good idea is that Java's
  garbage collector would then clean up the gates no one points at,
  freeing memory. If the circuit is large enough, this would be a
  desirable property, since we would need the memory.

\item \textit{Utility:} A global class responsible for holding and maintaining
  all global variables such as $n$, $t$ and the codewords used for
  sending messages.

\item \textit{Gate:} The Gate class is the interface that all other
  gates implement. Each gate is handled differently, but they all
  have a share that can be extracted once the gate is evaluated. The
  gates have no responsibilities and are thus merely representations
  of gates. They cannot evaluate themselves, as we need to evaluate
  several multiplication gates at the same time.
\end{itemize}

\subsection{Code walk through}
I will here go through each of the protocols again, but this time
focus only on their implementation aspects. Some are straightforward
and will therefore be handled in less depth.

First I will explain some of the general code used by all of the
protocols.

\paragraph{Communication:} The players (and users) communicate via
sockets. Whenever a player (or user) needs to send a message to
another, he calls the method \textit{sendMessage()}, which takes as
input a codeword, a list containing the message(s) he wants to send,
and the ID of the receiving player. The codeword can be one of several
choices and is interpreted by the receiving player's server. It tells
the length of the message list and what to do with the received
messages. Besides sending the messages, \textit{sendMessage()} also
records the number of bytes sent. That way I can afterwards check the
total number of bytes sent between players and users.

The reason for choosing this way of communicating is the ease with
which it could be transferred to several machines, and because it
provides an easy way of registering how many bytes were sent via the
DataOutputStream. It also enables me to send objects through the
ObjectOutputStream as long as they are serializable, which has proven
very useful. The downside of using sockets is that one cannot have an
unlimited number of sockets running at the same time on a single
machine, and when simulating larger numbers of players, socket
exceptions become more and more common.
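
To illustrate the byte counting, here is a small self-contained sketch
(method names and the message layout are my own illustration, not the
actual wire format): a DataOutputStream keeps a running total of the
bytes written to it, which is what makes the accounting in
\textit{sendMessage()} cheap.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ByteCounting {
    // Counts the bytes a message would occupy on the wire: a codeword,
    // the list length, and one long per list entry. DataOutputStream.size()
    // returns the number of bytes written so far.
    public static int bytesFor(int codeword, long[] message) {
        try {
            DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
            out.writeInt(codeword);                  // 4 bytes
            out.writeInt(message.length);            // 4 bytes
            for (long m : message) out.writeLong(m); // 8 bytes each
            return out.size();
        } catch (IOException e) {                    // cannot happen for an in-memory stream
            throw new RuntimeException(e);
        }
    }
}
```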

\paragraph{Race conditions and locks:} When running a large number of
threads on the same machine that share state, such as a player and his
server, strange things can happen if one is not careful, such as a
list being cleared by one thread while another is using it. This can
be avoided with the use of locks. I chose to have a universal lock
located in the Utility class, since this is accessible to all
classes. Whenever something wants to interact with sensitive code that
could cause race conditions, it has to acquire the lock before doing
so, releasing it only after it has finished. I tried using the
``synchronized'' keyword instead, but this resulted in a deadlock,
which is why I ended up using a lock.
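
The pattern looks roughly like this (a sketch; only the idea of a
single shared lock is taken from the text, the class and method names
are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// One lock shared by all classes, as with the lock in the Utility class.
public class SharedLock {
    public static final ReentrantLock LOCK = new ReentrantLock();

    // The acquire/mutate/release discipline described above: the lock is
    // always released in a finally block, even if the critical section throws.
    public static void withLock(Runnable criticalSection) {
        LOCK.lock();
        try {
            criticalSection.run();    // e.g. append to a shared message list
        } finally {
            LOCK.unlock();
        }
    }
}
```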

I will now present the choices I made when implementing the different
protocols, and the reasoning behind them:

\begin{protocol}
  \texttt{Share}$(Dealer\ P_D \in (P\cup U), s, d)$:

  This was done by picking $d$ coefficients at random using Java's
  SecureRandom class, which generates cryptographically strong
  pseudo-random numbers, and then evaluating the resulting polynomial
  at each $i=1,\ldots,n'$, sending the result to $P_i$. For testing
  purposes I used the normal Random class instead, as it gives
  deterministic results.
\end{protocol}
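
A minimal sketch of this sharing step over $\w{Z}_p$ (names are
illustrative; the real code uses SecureRandom and sends each share
over a socket):

```java
import java.math.BigInteger;
import java.util.Random;

public class ShareSketch {
    // Share(s, d): pick d random coefficients, fix f(0) = s, and give
    // player i the share f(i) mod p, evaluated with Horner's rule.
    public static BigInteger[] share(BigInteger s, int d, int n, BigInteger p, Random rnd) {
        BigInteger[] coef = new BigInteger[d + 1];
        coef[0] = s.mod(p);                                   // constant term is the secret
        for (int k = 1; k <= d; k++)
            coef[k] = new BigInteger(p.bitLength(), rnd).mod(p);
        BigInteger[] shares = new BigInteger[n];
        for (int i = 1; i <= n; i++) {
            BigInteger x = BigInteger.valueOf(i), y = BigInteger.ZERO;
            for (int k = d; k >= 0; k--)                      // Horner evaluation of f(i)
                y = y.multiply(x).add(coef[k]).mod(p);
            shares[i - 1] = y;
        }
        return shares;
    }
}
```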

\begin{protocol}
  \texttt{ReconsPriv}$(Receiver\ P_R \in (P\cup U), d, [s]_d)$:

  This is one of the more interesting protocols, since it caused much
  trouble even though it is easy to explain in theory. Sending each
  player's share to the receiver is not the problem; it is the check
  that took some time to get right. We need at least $d+t'+1$ of the
  received shares to lie on a degree-$d$ polynomial. Now, we can
  safely assume that we receive $n'$ shares in total\footnote{Due to
    our global assumption, all players always send when they
    should}. If it happens that $d=2t'$ and $n'\leq 2t'+d$, then we
  need all shares to lie on the same polynomial. Since my Lagrange
  interpolation implementation does not give back a polynomial, but
  only the polynomial evaluated at a given point, I need to call it
  $n'$ times to evaluate each point and compare the result to the
  corresponding share. This means that if just one corrupt party
  decides to cheat, we will be unhappy. We can utilize Berlekamp-Welch
  (which needs $n'>d+2t'$ to work) if $d=t$, or if $d=2t'$ and
  $n'>d+2t'$ (which is the case for e.g.\ $n'=5, t'=1$), since we then
  have enough shares and little enough corruption to run BW, which
  gives us back a polynomial from which we can extract the secret as
  the first coefficient.
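
My interpolation is of the ``evaluate at a point'' kind; a sketch of
such a routine over $\w{Z}_p$ (with illustrative names) looks like
this:

```java
import java.math.BigInteger;

public class LagrangeSketch {
    // Evaluates, at the point x, the unique polynomial of degree < xs.length
    // passing through the points (xs[j], ys[j]) over Z_p.
    public static BigInteger interpolateAt(long x, long[] xs, BigInteger[] ys, BigInteger p) {
        BigInteger result = BigInteger.ZERO;
        for (int j = 0; j < xs.length; j++) {
            BigInteger term = ys[j];
            for (int m = 0; m < xs.length; m++) {
                if (m == j) continue;
                // multiply by (x - x_m) / (x_j - x_m) mod p
                term = term.multiply(BigInteger.valueOf(x - xs[m]))
                           .multiply(BigInteger.valueOf(xs[j] - xs[m]).modInverse(p))
                           .mod(p);
            }
            result = result.add(term).mod(p);
        }
        return result;
    }
}
```

Checking the share of $P_i$ then amounts to comparing the value
interpolated at the point $i$ against the share actually received.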

  \paragraph{Berlekamp-Welch:} This was a tricky algorithm to get
  right. It takes as input the degree of the resulting polynomial and
  the shares needed for the algorithm. To solve the system of
  equations, I used the framework JScience, which provides a class
  that takes a matrix as input and does LU decomposition with a
  modulus -- another way of doing Gaussian elimination. I implemented
  BW as described in section \ref{BW}, and got to the point where I
  had to divide the two polynomials $G$ and
  $E$. This was not supported by any framework I could find, so I had
  to implement that myself, following the long-division pseudocode
  from \cite{long-division}. In theory the division should always
  leave a remainder of $0$, since it is provable that $f(X)$ can be
  written as $G(X)/E(X)$\footnote{see section \ref{BW}}. Therefore I
  chose to treat a non-zero remainder as a non-fatal error, notifying
  the user of the problem. Having polynomial long division
  implemented, I continued on
  to testing the algorithm, and a problem came up: If there are not
  exactly $t'$ errors, it is impossible to solve for a solution to the
  system of equations. The reason is that no unique solution exists
  because we have at least one free variable. I would personally have
  implemented the solver such that it fixes those free variables and
  returns any solution, but I settled on introducing one error at a
  time and rerunning the algorithm until it succeeded. As I do not
  know how many corrupted shares are already present, it would not be
  correct to introduce $t'$ errors at once, which is why they are
  introduced one at a time. This is only sound because I know a
  solution exists. The last unforeseen error occurred when $G(X)$
  became the $0$-polynomial. This of course means that $f(X)$ is also
  the $0$-polynomial, but I had not taken this into consideration. It
  was easily fixed, though, since checking whether $G(X)$ is $0$ comes
  down to checking whether its degree is $-1$ and the lone coefficient
  is $0$.
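
A sketch of the exact long division over $\w{Z}_p$ (coefficients are
stored low-order first; an illustration, not the thesis code):

```java
import java.math.BigInteger;

public class PolyDiv {
    // Long division of G by E over Z_p, returning the quotient and checking
    // that the remainder is zero, as the BW correctness argument predicts.
    public static BigInteger[] divideExact(BigInteger[] g, BigInteger[] e, BigInteger p) {
        BigInteger[] rem = g.clone();
        int dq = g.length - e.length;                      // degree of the quotient
        BigInteger[] q = new BigInteger[dq + 1];
        java.util.Arrays.fill(q, BigInteger.ZERO);
        BigInteger leadInv = e[e.length - 1].modInverse(p);
        for (int k = dq; k >= 0; k--) {
            // cancel the current leading coefficient of the remainder
            BigInteger c = rem[k + e.length - 1].multiply(leadInv).mod(p);
            q[k] = c;
            for (int j = 0; j < e.length; j++)
                rem[k + j] = rem[k + j].subtract(c.multiply(e[j])).mod(p);
        }
        for (BigInteger r : rem)                           // must divide exactly
            if (r.signum() != 0) throw new ArithmeticException("non-zero remainder");
        return q;
    }
}
```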
\end{protocol}

\begin{protocol}
  \texttt{ReconsPubl}$(d, [s_1]_d,\ldots,[s_T]_d)$:
  
  For this protocol, step 3 is the most interesting to look at. Though
  the first steps are also noteworthy, it is the very last step I will
  focus on. First, though, I needed to create a polynomial
  $u(\beta_j)$, where $j$ is the index of a player. This was done as
  described. Recall that $\beta_j = j$, since we work in the field
  $\w{Z}_p$. The share sent to $P_j$ for reconstruction is merely
  $u(j)$. When everyone has reconstructed their share of $u$ and sent
  it to all $P\in\w{P}'$, each player either runs the shares through
  BW, which gives back a polynomial, or becomes unhappy if he receives
  unhappy. If we are happy, BW can always be done, since the degree of
  $u$ is $T-1$, and BW works if $n'>d+2t'$. Given that $d=T-1$ (recall
  that $T=n'-2t'$), we get $n'>(n'-2t'-1)+2t'=n'-1$ which is obviously
  always true. The resulting polynomial can now be used to compute $T$
  $(T-1)$-consistent shares. I use these to compute $s_1, \ldots, s_T$
  by doing the following: Since we know the shares are consistent, we
  can use Lagrange interpolation to get hold of $s_1$ -- recall that
  the polynomial has the form $u(j)=s_1+s_2\cdot j +\ldots+s_T\cdot
  j^{T-1}$. Then $s_1$ is subtracted from every share $u(j)$, and
  every share is divided by its index $j$; this corresponds to
  subtracting $s_1$ from the polynomial $u$ and dividing it by
  $X$. Now we can do Lagrange interpolation again to recover $s_2$;
  then we subtract the newly found $s_2$ from all shares, divide by
  $j$, and recover $s_3$. This process is repeated
  until $s_T$ has been recovered. The only reason I generate
  consistent shares and do the computations on points instead of on
  the polynomial itself is that I already had a working implementation
  that used points, not polynomials. I could instead do what is
  proposed in the theory for this protocol and compute on the
  polynomial directly, but the result is the same.
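
The subtract-and-divide loop can be sketched as follows, operating on
the $T$ consistent points $u(1),\ldots,u(T)$ (class and method names
are my own):

```java
import java.math.BigInteger;

public class ExtractCoefficients {
    // Recovers s_1..s_T from shares u(1..T) of u(j)=s_1+s_2*j+...+s_T*j^{T-1}
    // by repeated interpolate-at-zero, subtract, divide-by-index.
    public static BigInteger[] extract(BigInteger[] shares, BigInteger p) {
        int T = shares.length;
        BigInteger[] y = shares.clone();
        BigInteger[] s = new BigInteger[T];
        for (int r = 0; r < T; r++) {
            s[r] = lagrangeAtZero(y, T - r, p);          // current constant term
            for (int j = 1; j <= T; j++)                 // peel it off: (u(j)-s)/j
                y[j - 1] = y[j - 1].subtract(s[r])
                                   .multiply(BigInteger.valueOf(j).modInverse(p)).mod(p);
        }
        return s;
    }

    // interpolate through (1,y[0]),...,(k,y[k-1]) over Z_p and evaluate at 0
    static BigInteger lagrangeAtZero(BigInteger[] y, int k, BigInteger p) {
        BigInteger res = BigInteger.ZERO;
        for (int j = 1; j <= k; j++) {
            BigInteger term = y[j - 1];
            for (int m = 1; m <= k; m++) {
                if (m == j) continue;
                term = term.multiply(BigInteger.valueOf(-m))
                           .multiply(BigInteger.valueOf(j - m).modInverse(p)).mod(p);
            }
            res = res.add(term).mod(p);
        }
        return res;
    }
}
```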
\end{protocol}

\begin{protocol}
  \texttt{DoubleShareRandom}$(d,d')$:
  
  The two calls to \texttt{Share} are simple and straightforward. I
  implemented it such that you first choose a random secret, then
  compute $P_i$'s shares of degree $d$ and $d'$ respectively, and then
  send each resulting pair together in the same message to $P_i$,
  instead of calling \texttt{Share} twice. The result is the same,
  except you save a little communication overhead.

  Applying $M$ to $[s_i]_{d, d'}$ was done by first applying $M$ to
  the $[s_i]_d$'s and then to the $[s_i]_{d'}$'s. When a player has
  done this locally, he could in principle continue if he does not
  have an index $i>T$. This would trigger the next step in the
  protocol before everyone was ready, though, so I included a
  synchronization message that everyone sends once they are ready for
  the next step. This avoids race conditions as well as overloading
  the central component more than necessary, and makes it easier to
  transfer to several machines. Only when a player has received ready
  from all others does he continue the protocol.
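
The local application of $M$ is just a matrix-vector product over
$\w{Z}_p$, done once per degree (a sketch with illustrative names):

```java
import java.math.BigInteger;

public class ApplyMatrixSketch {
    // Local step of DoubleShareRandom: apply the matrix M to a vector of
    // shares, entry-wise mod p. Called once for degree d and once for d'.
    public static BigInteger[] apply(BigInteger[][] m, BigInteger[] x, BigInteger p) {
        BigInteger[] y = new BigInteger[m.length];
        for (int i = 0; i < m.length; i++) {
            y[i] = BigInteger.ZERO;
            for (int j = 0; j < x.length; j++)
                y[i] = y[i].add(m[i][j].multiply(x[j])).mod(p);
        }
        return y;
    }
}
```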

  For the check we do not do error correction, since we are not
  interested in the secret, but merely in whether all players have
  been honest. Therefore this can be done using standard Lagrange
  interpolation for all points $1,\ldots, n'$, as also described for
  \texttt{ReconsPriv}.
\end{protocol}

\begin{protocol}
  \texttt{GenerateTriples}$()$

  For the generation of the triples, I created a thread which takes
  care of it for the player. The idea was that this might make it
  possible to run the generation multiple times in parallel instead of
  in sequence, but I did not get to implement that. Actually doing so
  would require that the different generations could not interact in
  any way, to avoid race conditions, and ensuring this would take more
  time than I had available. Instead, the double-sharings were
  generated in sequence, and to avoid race conditions I waited with
  initiating the next generation until the player received a ``ready''
  from all players, indicating they were now ready for the next step.

  The protocol more or less only uses sub-protocols to do its
  work. Other than that, it does some simple local calculations. What
  is interesting is whether $[a][b]=[c]$. While I do not test for this
  while running the actual protocol, I did extensive testing with the
  TestController class. This can be plugged in as an implementation of
  Controller and worked as my test suite. Here, I reconstructed $[a],
  [b]$ and $[c]$ and checked whether it was actually true that
  $ab=c$. It was a rather big step when this worked, since it was the
  first time I had any real proof that my program was starting to
  function as it should.
\end{protocol}

\begin{protocol}
  \texttt{PreparationPhase}$()$

  The preparation phase steps are taken care of by the Controller. It
  initiates the protocol and determines when the players are ready for
  the next sub-protocol. This is done using the pre-calculated
  variables for how many times the players should generate triples and
  the number of segments the work should be split into. Nothing
  exciting happens other than that. Note that the generation of
  triples is done sequentially, and not in parallel as the protocol
  states, as I did not get to implement that. Note, however, that this
  does not affect the communication cost of the protocol.

    \begin{protocol}
      \texttt{FaultDetection}$()$
      
      Detecting whether any honest player is unhappy is quite simple
      when $t<\nicefrac{n}{3}$ and we can assume that everyone sends
      the same value to all others. Every player just sends his
      happy-bit, and if you receive unhappy, you become unhappy. Then
      you repeat this, but this time take a majority vote, which gives
      the result.

      \textit{Proof of correctness:} Assume an honest player is
      unhappy. He will send this to everyone, making all honest
      players become unhappy, and thus output unhappy in the majority
      vote, as $t<\nicefrac{n}{3}$. Now assume all honest players are
      happy at the beginning of \texttt{FaultDetection}. If a corrupt
      player sends unhappy, he sends it to everyone, making the honest
      players output unhappy in the majority vote. Otherwise, everyone
      is happy and stays that way, which makes the output happy.
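
The two rounds can be sketched like this, under the assumption that
everyone sends the same bit to all (names are illustrative):

```java
public class FaultDetectionSketch {
    // Round 1: every player broadcasts his happy-bit; anyone who sees an
    // unhappy bit becomes unhappy himself, so all honest players agree.
    public static boolean[] orRound(boolean[] unhappyBits) {
        boolean any = false;
        for (boolean b : unhappyBits) any |= b;
        boolean[] out = new boolean[unhappyBits.length];
        java.util.Arrays.fill(out, any);
        return out;
    }

    // Round 2: rebroadcast and take a majority vote; with t < n/3 the
    // honest players, who now all hold the same bit, decide the outcome.
    public static boolean majorityUnhappy(boolean[] rebroadcastBits) {
        int unhappy = 0;
        for (boolean b : rebroadcastBits) if (b) unhappy++;
        return 2 * unhappy > rebroadcastBits.length;
    }
}
```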
    \end{protocol}
    
    \begin{protocol}
      \texttt{FaultLocalization}$()$

      This is again a protocol which is simple in theory but becomes
      very complicated in practice. In my implementation the
      lowest-indexed player holds the title of judge, so there is no
      evening out of the work load. Before every player can send what
      they received and what random variables they chose, I needed a
      place to store it. This became the MessageList, which I found to
      be the easiest way of dealing with the problem of storing
      received messages. It has the weakness that it blindly stores
      messages, putting them at the end of the list for the player who
      sent the message, so if there is trouble with race conditions,
      it will fail. Apart from that, though, it works excellently,
      using a counter that keeps track of which index in the different
      lists I have gotten to. One can also peek at a given index to
      give the accused player a chance to check whether he agrees with
      the judge. Thus, to send the judge all relevant information, you
      merely send your MessageList, which I made serializable.

      Now, the judge has to impersonate all players, checking whether
      they sent something they shouldn't. This proved to be a great
      problem, since debugging the messages sent is rather
      complicated. Instead of copy/pasting code, I introduced a new
      parameter to all the methods in the preparation phase: all the
      MessageLists gathered in a HashMap. If it is null, then we run
      in normal mode; if not, then the judge is the one running
      it. Whenever the player the judge is currently impersonating
      ($P_i$) ought to send something to $P_j$, the judge instead gets
      the next message that $P_j$ got from $P_i$. This is checked
      against the value the judge can calculate $P_i$ should have
      sent. Here lies the debugging problem: imagine that the messages
      do not match, but they should, as we are comparing two honest
      players. All we have are numbers and no apparent way to check
      where in the code the problematic message originated. Therefore
      one has to cross-check the messages up until this point with the
      logs inside the MessageLists. This, I am sure, can be improved
      upon greatly.

      I should also note that I cheat a little: since I know the
      corrupt players are the ones with the highest indexes, I start
      impersonating the highest index first. This is done because the
      race condition problems would prevent a correct result most of
      the time if I had to impersonate $P_1$ first and go up from
      there. It is not actual cheating, just helping the judge out a
      little to avoid exceptions.

      When the judge has found a pair of players who disagree on what
      was received and what should have been sent, he accuses them by
      sending the accused players' indexes to all $P\in\w{P}'$. The
      two players can then defend themselves. The accused player who
      claims to have received a wrong message can easily check whether
      he agrees via his own MessageList; the judge announces an index
      and the message where there was a disagreement, and this is
      easily verifiable. The accused who sent something, though,
      cannot defend himself in my implementation. In principle he
      should know what he sent, but to check this easily I would have
      to store every message sent -- which is not a problem in itself,
      there was just not enough time -- and the index of the message
      would not be the same for the sender and the receiver. Thus that
      is a future work problem. As of now, he always agrees with the
      judge.
    \end{protocol}
    
    \begin{protocol}
      \texttt{PlayerElimination}$()$
      
      At first, this seemed to be one of the easiest steps, as I would
      just have every player remove the two disputing players' IDs
      from their respective lists of player IDs, then call back to the
      Controller, who would then synchronize the restart of the
      segment. However, doing so created a problem with BW, amongst
      others. Since the indexes no longer started from 1, some methods
      produced incorrect output. I never found the reason, so the
      solution was to fit all players into the range $1,\ldots,
      n'$. A more in-depth explanation is found in section
      \ref{problems}. In short, if a Player has an ID that needs to
      change, he restarts his server at the correct port and adjusts
      his ID in himself and in all other classes that he is
      responsible for maintaining.
    \end{protocol}
\end{protocol}
\ \newline
\begin{protocol}
  \texttt{ComputationPhase}$()$
  
  Addition and constant gates are straightforward local computations,
  and input and output gates just require the use of the
  \texttt{ReconsPriv} protocol. An interesting detail about the input
  and output gates is that the Users handle most of the work. I first
  thought to just use the socket of $P_i$ for user $U_i$, but it might
  happen that $P_i$ gets eliminated, which means his server is shut
  down. Therefore the users had to have their own servers at their own
  ports. The User and Player are still connected, since I keep a local
  Circuit for each pair $U_i,P_i$, where $P_i$ holds the circuit. One
  could argue that the design choice should be reversed, such that
  each player has a user associated and the user holds the
  circuit. This might enable players and users to share sockets and
  circuits with no immediate problems.
  
  The multiplication gates are not harder to implement than the
  others; the only difference is that they were harder to debug. I had
  a bug at one point when running with $n=5$ which caused the $T$'th
  generated triple to be incorrect. I only checked the first two
  triples at that point, since I had tested with $n=4$ up until then
  (which makes $T=n-2t=2$). Suddenly, the multiplication gates gave
  wrong outputs, and there are a lot of computations in which one
  might have made an error. The bug actually helped assure me that
  everything else worked, and that it was just the triple that
  failed. This only holds for the case where no players have been
  eliminated yet; when elimination happens, a bug occurs in the
  multiplication code, causing errors in the output. More information
  is found in section \ref{bugs}.
\end{protocol}

\subsection{Circuits}\label{circuits}
In order to check whether everything worked as it should, I started
out with a small circuit consisting only of $n$ input gates, $n$
constant gates (gate $i$ holding the value $i$), 3 addition gates, 2
multiplication gates and $n$ output gates. If we denote the input of
user $i$ by $I_i$, it evaluates the expression:
\begin{eqnarray}
  R = (((I_1+I_2)+(I_3+I_4))\cdot 2)\cdot 2 \nonumber
\end{eqnarray}
and $R$ is then output to all users. For testing purposes the users
always input their index, which makes $R=40$. This is indeed the
output of running my implementation when there is no player
elimination (with the appropriate modulo applied). Even if player
elimination occurs, it is only the multiplications that are faulty, as
leaving them out outputs the correct result ($R=10$) to all users.

For endurance testing of how many multiplication gates my
implementation can run, and to see whether the theory holds, I also
created a circuit with as many multiplication gates as you like (given
as input to the program, denoted $c_M$) and a multiplicative depth of
1. This generates a circuit with $n$ input, constant and output gates,
and $c_M$ multiplication gates that all just multiply 2 by 3.

\subsection{Helper Frameworks}
\subsubsection{JAMA}
JAMA is a framework for representing matrices in Java. It has all the
usual matrix operations available\footnote{including functions such as
  inverse, matrix multiplication, transpose and determinant} and can
do a number of different decompositions, one of which can be used for
solving a system of linear equations (via Gaussian elimination). This,
however, cannot be done with modular arithmetic. It is very easy and
intuitive to use, though, which is a great plus compared to JScience.

\subsubsection{Jep}
Jep is an extensive framework with all sorts of smart mathematical
classes representing things such as polynomials and groups. It allows
the user to parse mathematical expressions in an easy way. The only
parts I use, though, are the ability to compute in rings and its
representation of polynomials. A downside is that the framework is not
complete: there is, for instance, no support for dividing polynomials
unless the divisor is a constant polynomial. Thus I had to implement
this myself via long division.

\subsubsection{JScience}
This framework states that it aims to be the leading framework for the
scientific community. It does have matrix classes that can handle
rings and Gaussian elimination, which enabled me to compute the
solution to a linear system of equations. For some reason, though,
polynomial division is not implemented here either, which is why I use
all these different frameworks: none of them provides everything I
needed. I could have implemented long division for JScience's
polynomial class, but chose Jep's version since it was much simpler to
understand.
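What the ring-aware elimination accomplishes can be sketched by hand
as follows; the \texttt{ModularSolve} class is hypothetical and not
JScience's actual API, and it assumes a word-sized prime $p$ and an
invertible coefficient matrix:

```java
// Sketch of solving A x = b over Z_p by Gauss-Jordan elimination,
// replacing floating-point division with modular inverses.
public class ModularSolve {
    public static long[] solve(long[][] A, long[] b, long p) {
        int n = b.length;
        long[][] m = new long[n][n + 1];           // augmented matrix
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) m[i][j] = ((A[i][j] % p) + p) % p;
            m[i][n] = ((b[i] % p) + p) % p;
        }
        for (int col = 0; col < n; col++) {
            int piv = col;
            while (m[piv][col] == 0) piv++;        // assumes A invertible mod p
            long[] t = m[col]; m[col] = m[piv]; m[piv] = t;
            long inv = modInverse(m[col][col], p); // normalize pivot row
            for (int j = col; j <= n; j++) m[col][j] = m[col][j] * inv % p;
            for (int row = 0; row < n; row++) {    // eliminate the column
                if (row == col || m[row][col] == 0) continue;
                long f = m[row][col];
                for (int j = col; j <= n; j++)
                    m[row][j] = ((m[row][j] - f * m[col][j]) % p + p) % p;
            }
        }
        long[] x = new long[n];
        for (int i = 0; i < n; i++) x[i] = m[i][n];
        return x;
    }

    // Fermat's little theorem: a^(p-2) = a^(-1) mod p for prime p.
    static long modInverse(long a, long p) {
        long r = 1, base = a % p, e = p - 2;
        while (e > 0) {
            if ((e & 1) == 1) r = r * base % p;
            base = base * base % p;
            e >>= 1;
        }
        return r;
    }
}
```

The only difference from textbook Gaussian elimination is that every
division is replaced by a multiplication with a modular inverse.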

\subsection{Problems}\label{problems}
Here I will discuss the larger problems that arose when implementing
the theory of \cite{mpc1}.

\paragraph{Hyper-invertible matrix.} The implementation of the matrix
was fairly straightforward, but once the first version, which used
doubles, was done, I discovered that the entries of the matrix
contained decimal numbers due to loss of precision. I used doubles
because JAMA requires them if one is to use its internal functions,
which made the implementation a lot easier. Calculating with doubles
alongside numbers from $\w{Z}_p$, however, gives results that are
neither linear nor correct. The consequence was that the doubleshares
constructed via the \texttt{DoubleShareRandom} protocol were no longer
correct doubleshares. I therefore had to make sure that the entries in
the matrix were numbers from $\w{Z}_p$. This was ensured by using the
Jep framework, which has a class called $Zn$, essentially a ring that
lets you compute everything mod $n$. Once the matrix was constructed,
and it was ensured that any vector multiplied onto it lay in the
correct field, there was no further need for Jep. From then on the
internal JAMA functions could be utilized, which made things easier
for me and ensured that fewer errors were made.
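To illustrate, one standard construction builds the matrix from
Lagrange coefficients: given evaluation points
$\alpha_1,\ldots,\alpha_n$ and $\beta_1,\ldots,\beta_m$, the entry
$M_{i,j} = \prod_{k \neq j} (\beta_i - \alpha_k)/(\alpha_j -
\alpha_k) \bmod p$ maps the values of a polynomial at the $\alpha$'s
to its values at the $\beta$'s. A sketch of computing these entries
directly in $\w{Z}_p$ follows; the \texttt{HyperInvertible} class is
hypothetical, and my actual code uses Jep's $Zn$ class rather than
\texttt{BigInteger} for the inverses:

```java
import java.math.BigInteger;

// Sketch: hyper-invertible matrix entries computed in Z_p instead of
// with doubles, so no precision is lost. Entry (i, j) is the Lagrange
// coefficient taking values at alpha_1..alpha_n to the value at beta_i.
public class HyperInvertible {
    public static long[][] build(long[] alpha, long[] beta, long p) {
        int n = alpha.length;
        long[][] M = new long[beta.length][n];
        for (int i = 0; i < beta.length; i++) {
            for (int j = 0; j < n; j++) {
                long num = 1, den = 1;
                for (int k = 0; k < n; k++) {
                    if (k == j) continue;
                    num = num * (((beta[i] - alpha[k]) % p + p) % p) % p;
                    den = den * (((alpha[j] - alpha[k]) % p + p) % p) % p;
                }
                long denInv = BigInteger.valueOf(den)
                        .modInverse(BigInteger.valueOf(p)).longValue();
                M[i][j] = num * denInv % p;
            }
        }
        return M;
    }
}
```

Every entry is a field element by construction, so multiplying the
matrix onto a vector over $\w{Z}_p$ stays in the field.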

\paragraph{Player elimination.} The great trouble I had implementing
player elimination has already been discussed above, but I would like
to extend that discussion. Not only did I have problems logging the
messages and impersonating a player (extracting the correct message
from that player's log), I also had problems once a dispute was
finally found. The problem itself was foreseen; that I could not see
why it occurred was not. I store all received shares in a HashMap for
the sole reason of being able to handle eliminated players whose
indices differ from the highest two. If I just used a simple list,
player $P_i$ would not be guaranteed to receive the share belonging to
index $i$. Even though I tried, by using HashMaps, to handle the
problem I knew would eventually come, it still failed. The solution
came after some thought on the issue: I knew the protocol worked when
the indices ranged over $1,\ldots, n'$, so why not force that to be
the case after a player elimination? Each player therefore checks
whether his index should move down and, if so, shuts down his
server. Afterwards, the players who moved restart their servers with
the new index as the port. This solution works, with the exception of
race conditions, which sometimes appear and shut down the protocol,
and of the multiplication-gate computations, which fail to produce the
correct output.
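The idea of keying shares by sender index rather than by arrival order
can be illustrated as follows; the \texttt{ShareStore} class is a
hypothetical simplification of my actual bookkeeping:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of why received shares are keyed by the sender's player
// index instead of stored in a list: after a player elimination the
// surviving indices need not be contiguous, so a position in a plain
// list no longer identifies the sender.
public class ShareStore {
    private final Map<Integer, Long> shares = new HashMap<>();

    public void receive(int playerIndex, long share) {
        shares.put(playerIndex, share);
    }

    public Long shareFrom(int playerIndex) {
        return shares.get(playerIndex); // null if that player was eliminated
    }
}
```

With a list, eliminating a player in the middle would shift every
later share down one position; with the map, lookups by index remain
correct, and re-indexing to $1,\ldots,n'$ then restores the contiguous
range the rest of the protocol expects.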

\subsection{Known bugs}\label{bugs}
There are fortunately few remaining bugs that I know of, but those
that exist are not easily spotted or fixed. These bugs were observed
on a MacBook Pro running Mac OS X 10.5.8 with a 2.16 GHz dual core and
3 GB of RAM. Surprisingly, when run on a different machine with a
2.6 GHz quad core and 8 GB of RAM, the socket exceptions mentioned
below prevent the program from finishing at very low numbers of both
gates and circuit size. Thus, the problems do not stem from a lack of
memory. It might, however, have something to do with my permissions on
the quad core, as it was accessed via an ssh connection to the
university.

\paragraph{Race conditions} will sometimes occur after the player
elimination protocol has been run. They show themselves by some
players either starting the generation of triples before they ought
to, or by the judge accusing more than one set of players in the same
run of player elimination -- oftentimes even accusing the same player
twice, which indicates a bug in the checking of message logs.

\paragraph{Faulty results:} In a run of the entire protocol where a
player elimination has taken place, the results of the multiplication
gates are faulty. Every other gate is correctly evaluated and the
triples are correctly generated (this is checked through extensive
testing). The reason for the failure lies, I think, with the way I
eliminate players. They change their ID, but there might be a problem
when a user and a player interact. Users need to use a triple for
their input gate, and the triples are stored at the players. A user
keeps the same player throughout the protocol, though, so if the
players change ID, I may have interfered somehow. Another place the
bug might hide is in the multiplication-gate calculations themselves,
where I may forget to take eliminated players into account.

\paragraph{Socket exceptions:} This problem occurs when $n$ is too
high and/or the circuit is too large. The issue first presents itself
around $n=15$ with a small circuit of 2 multiplication gates, but when
exactly the socket exception is thrown varies. This means that I
cannot test for $n>15$ unless I get access to multiple machines and
alter the program to run across them. Perhaps there is a fix in
changing a local setting on the computer it is run on, but I have not
been able to find a solution in that area.

\paragraph{Memory leaks:} I have no proof of this other than the
results of running the protocol, which crashes with symptoms of memory
leaks. The problem shows itself when a certain limit of memory usage
is reached, and it can happen in both the preparation phase and the
computation phase. For the computation phase, part of the problem lies
in the missing ability to discard gates once no other gates use
them. In the preparation phase I can think of no good reason for where
the problem lies, but a socket exception will be thrown if the circuit
is too large and thus requires the generation of more triples than my
program can handle.
