\section{The Protocol}
We are now equipped to face and understand the protocol. The main
protocol consists of a preparation phase and a computation phase, and
in the following, I will go through the sub-protocols needed in order
to obtain the desired properties for our secure multiparty computation
main protocol. For the reader's convenience, I will keep the protocol
names similar to \cite{mpc1}, but the notation might differ.

The protocols described are either non-robust but detectable or
robust. The idea is to make some pre-computations prior to the actual
computation. These pre-computations are non-robust but detectable
which means that an adversary can delay the computation, but at the
cost of sacrificing a corrupted player each time.

Before we continue, I present a few important definitions. I define
$T=n'-2t'=n-2t$. This fixed quantity will be used frequently and plays
a large role in many protocols and proofs. It always holds that
$n'-2t'=n-2t$, as we eliminate pairs of players where one is
guaranteed to be corrupt: with each elimination, $n'$ decreases by 2
and $t'$ by 1.

Later on, we want to (locally) apply linear functions to shares. This
is understood as $f([s^{(1)}]_d,\ldots,
[s^{(n')}]_d)=([y^{(1)}]_d,\ldots, [y^{(n')}]_d)$ and is done by
having each player $P_i$ locally compute $f(s_i^{(1)},\ldots,
s_i^{(n')})=(y_i^{(1)},\ldots, y_i^{(n')})$. Note that applying a
linear function to correct $d$-sharings yields correct
$d$-sharings. With that, let us continue on to the protocols. The
first is for sharing a secret $s$ with all players in $\w{P}'$, and is
used as a sub-protocol later.
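As a concrete illustration, here is a minimal Python sketch of applying a linear function locally to shares. The toy field $\w{Z}_{101}$, the helper names and the example function $f(x,y)=2x+3y$ are my own choices for illustration, not part of the protocol:

```python
# Sketch: applying a linear function locally to Shamir sharings.
# Toy prime field Z_101 and helper names are illustrative only.
import random

P = 101  # small prime, so we work in Z_p

def share(s, d, n):
    """Shamir-share s with a random degree-d polynomial f, f(0) = s."""
    coeffs = [s] + [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0) from the shares at points 1..n."""
    n = len(shares)
    s = 0
    for i in range(1, n + 1):
        num, den = 1, 1
        for j in range(1, n + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + shares[i - 1] * num * pow(den, P - 2, P)) % P
    return s

n, d = 7, 2
s1, s2 = 17, 42
sh1, sh2 = share(s1, d, n), share(s2, d, n)
# Each player applies f(x, y) = 2x + 3y to his own pair of shares;
# no communication is needed, and the result is again a d-sharing.
out = [(2 * a + 3 * b) % P for a, b in zip(sh1, sh2)]
assert reconstruct(out) == (2 * s1 + 3 * s2) % P
```

The point of the sketch is that the pointwise-computed shares lie on the polynomial $2f_1+3f_2$, which has the same degree and constant term $2s_1+3s_2$.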

\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{Share}$(Dealer\ P_D \in (P\cup U), s, d)$ 
  \end{centering}
    
    \begin{enumerate}
    \item The dealer $P_D$ generates shares according to a polynomial
      $f$ of degree $d$ with the constraints that $f(0) = s$ and all
      coefficients $\alpha_i \in_R \w{F}$. $P_D$ then sends $f(i)$ to
      $P_i$ for $i=1,\ldots, n'$.
    \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{Share} communicates $O(n\kappa)$ bits.
\end{lemma}
\noindent \textit{Proof:} The stated communication cost is easily seen
as we send a single share to each player in $\w{P}'$. \qed \newline

The dealer can of course be corrupt, and the protocol does not protect
against that. The consistency of the sharings has to (and will) be
verified separately in another protocol.
\newline

The purpose of the next protocol is to reconstruct the secret $s$ from
the shares given as input. It is called \texttt{ReconsPriv} since it
reveals the secret only to the player who does the reconstruction.

\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{ReconsPriv}$(Receiver\ P_R \in (P\cup
    U), d, [s]_d)$
  \end{centering}
  
  \begin{enumerate}
  \item In order for $P_R$ to recover the secret, he is going to need
    shares from everyone, so every $P_i \in \w{P}'$ sends his share of
    $[s]$ to $P_R$.
  \item $P_R$ can then check if at least $d+t'+1$ of the shares agree
    -- that is if there exists a polynomial $f(\cdot)$ which has
    degree $d$ and at least $d+t'+1$ of the received shares lie on
    $f$. If this is the case, $P_R$ remains happy, and outputs the
    secret $s$. Otherwise, $P_R$ gets unhappy, and thus sets his
    happy-bit to 0.
  \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{ReconsPriv} communicates $O(n\kappa)$ bits.
\end{lemma}
\noindent \textit{Proof:} The stated communication is easily seen as
$P_R$ receives a single share from all players in $\w{P}'$. \qed
\newline

Deciding if a polynomial exists under the given conditions requires a
more in-depth explanation. This is the point where the Berlekamp-Welch
(BW) algorithm comes into play. The requirement that at least $d+t'+1$
shares need to lie on the same polynomial depends on what $d$
is. \texttt{ReconsPriv} is only ever called with $d=t$ or $d=2t'$. If
we assume that $d=2t'$ and $n'\leq d+2t'$, then the requirement says
that all shares must lie on the same polynomial, which eliminates the
possibility of error correction. This is because BW requires
$n'>d+2t'$. Thus the only option is to check,
using Lagrange interpolation, whether all shares lie on the same
polynomial, and this enables us to detect if someone sent a corrupt
share. However, we can utilize BW if $d=t$, or if $d=2t'$ and
$n'>d+2t'$. In the scenario of $d=t$, the requirement for BW is
fulfilled, as $n'>d+2t'=t+2t'$, and $t<n'-2t'=n-2t$ makes the
inequality true. In general we want to do error correction rather than
Lagrange interpolation, because Lagrange will make the player unhappy
if there is just one corrupted share present. This means that we have
to do player elimination later, which results in a higher
communication cost. Using error correction, we can ignore corruption
and continue as if everyone were honest. Therefore, as soon as the
requirement for BW is fulfilled we use it. Formally, the protocol is,
based on the proofs above, \emph{robust} if $d=t$, or if $d=2t'$ and
$n'>d+2t'$, and \emph{detectable} if $d=2t'$ and $n'\leq d+2t'$.
\newline
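To make the acceptance condition of step 2 concrete, the following Python sketch implements the check by exhaustive search over candidate polynomials. This brute force stands in for Berlekamp-Welch, which does the same job efficiently; the field and all names are illustrative:

```python
# Brute-force stand-in (exponential, illustrative only, NOT the
# Berlekamp-Welch algorithm) for the acceptance check in ReconsPriv:
# does a degree-d polynomial exist on which >= d+t'+1 shares lie?
from itertools import combinations

P = 101  # toy prime field

def interpolate_at(points, x):
    """Lagrange-evaluate the unique polynomial through `points` at x."""
    r = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        r = (r + yi * num * pow(den, P - 2, P)) % P
    return r

def recons_priv(shares, d, t_prime):
    """shares: list of (i, f(i)). Return f(0) if >= d+t'+1 agree, else None."""
    need = d + t_prime + 1
    for base in combinations(shares, d + 1):
        agree = [p for p in shares
                 if interpolate_at(list(base), p[0]) == p[1]]
        if len(agree) >= need:
            return interpolate_at(list(base), 0)
    return None  # P_R sets his happy-bit to 0

# degree-2 sharing of s = 55 at points 1..7, with one corrupted share
coeffs = [55, 3, 9]
shares = [(i, sum(c * i**k for k, c in enumerate(coeffs)) % P)
          for i in range(1, 8)]
shares[4] = (5, (shares[4][1] + 1) % P)  # corrupt P_5's share
assert recons_priv(shares, d=2, t_prime=2) == 55
```

Despite one corrupted share, six honest shares still lie on the dealer's polynomial, so the $d+t'+1=5$ threshold is met and $P_R$ outputs the secret.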

There are times in later protocols where we need to reconstruct
several shares publicly, such that all players know the reconstructed
values. This is the purpose of \texttt{ReconsPubl}. The protocol
reveals up to $T$ secrets to all players in $\w{P'}$ and does this by
utilizing \texttt{ReconsPriv} $n'$ times. The input shares need not be
consistent, as the protocol is either detectable or robust.

\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{ReconsPubl}$(d, [s_1]_d,\ldots,[s_T]_d)$
  \end{centering}
  
  \begin{enumerate}
  \item All players in $\w{P}'$ start by locally computing $[u_j]_d$:
    \begin{eqnarray}
      [u_j]_d = [s_1]_d + [s_2]_d\beta_j + [s_3]_d\beta_j^2 + \ldots +
      [s_T]_d\beta_j^{T-1} \ \ ,j=1,\ldots,n' \nonumber
    \end{eqnarray}
    where $\beta_j$ is actually just $j$ since we work in
    $\w{Z}_p$. Using $\beta_j$ as notation makes it easier to see that
    $u$ is a function of $j$ which we compute in the points
    $j=1,\ldots, n'$. Details follow.

  \item We now call \texttt{ReconsPriv}$(P_j, d, [u_j]_d)$ for each
    $P_j \in \w{P}'$, which enables $P_j$ to know $u_j$ or become
    unhappy. $P_j$ sends unhappy if he became unhappy and sends $u_j$
    otherwise.
    
  \item $\forall P_i \in \w{P}':$ If $P_i$ received at least $T+t'$
    $(T-1)$-consistent values in step 2, he can now run BW to correct
    the up to $t'$ errors and compute $s_1,\ldots,s_T$ from the
    resulting polynomial. This can be done since we can perceive the
    $u_j$ as lying on a polynomial of degree $T-1$. If he did not
    receive enough values, he gets unhappy.
  \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{ReconsPubl} communicates $O(n^2\kappa)$ bits.
\end{lemma}
\noindent \textit{Proof:} Since \texttt{ReconsPriv} communicates
$O(n\kappa)$ bits, and we call this $n'$ times, the stated
communication is verified. \qed \newline

Whether the protocol is robust or only detectable depends on
$d$. \texttt{ReconsPubl} is only ever called with one of two values of
$d$: $t$ or $2t'$. If $d=t$, or $d=2t'$ and $n'>d+2t'$, the protocol is
robust since \texttt{ReconsPriv} is robust, and we therefore get at
least $T+t'=n'-t'$ $(T-1)$-consistent shares, which enables $P_i$ to always
compute all $T$ $s$'s. On the other hand, if $d=2t'$ and $n'\leq d+2t'$,
\texttt{ReconsPriv} is only detectable, which means there is no
guarantee of getting a consistent value back from all honest
players. Thus, $P_i$ might not be able to compute the $s$'s, and $P_i$
gets unhappy if this happens.

In step 1, we calculate $n'$ sharings $[u_j]$, but this can be viewed
as evaluating a polynomial $h(\beta_j)$ which after reconstruction is
defined as:
\begin{eqnarray}
  h(\beta_j):=\sum_{i=1}^{T}s_i\beta_j^{i-1} \ \ ,\text{where}\
  \deg (h)=T-1 \nonumber
\end{eqnarray}

This is used in the last step where we compute the $T$ $s$'s. Here, we
can use the reconstructed $u$'s as points on a polynomial and utilize BW
to retrieve the polynomial and compute $s_1,\ldots, s_T$. This is done
the following way: as we have $n'$ points which all lie on the same
polynomial of degree $T-1$, containing at most $t'$ errors, we can
invoke BW to extract the polynomial $h$. Once we have $h$, we can read
off $s_1$ as the constant coefficient. Then, we subtract $s_1$ from
the polynomial and divide by $\beta_j$ (recall that in our case this
is just dividing by $j$). Now we have $h'(\beta_j) = s_2 +
\sum_{i=3}^{T}s_i\beta_j^{i-2}$, which gives us $s_2$. This
subtract-and-divide step is repeated $T-1$ times in total to extract
all $T$ $s$'s.
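The extraction loop can be illustrated with an error-free Python sketch. It omits BW entirely (all $u_j$ are assumed correct), and the field and names are my own:

```python
# Error-free sketch of extracting s_1,...,s_T from u_j = h(beta_j)
# by repeated subtract-and-divide. No BW: every u_j is assumed honest.
P = 101  # toy prime field

def interp0(points):
    """Lagrange-evaluate at 0 the polynomial through `points`."""
    r = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        r = (r + yi * num * pow(den, P - 2, P)) % P
    return r

secrets = [11, 22, 33]          # s_1, s_2, s_3, so T = 3
n_prime = 7
# u_j = h(beta_j) = s_1 + s_2*j + s_3*j^2, with beta_j = j
u = {j: sum(s * j**k for k, s in enumerate(secrets)) % P
     for j in range(1, n_prime + 1)}

out = []
for _ in secrets:
    s_k = interp0(list(u.items()))   # constant coefficient of current h
    out.append(s_k)
    # subtract s_k and divide by beta_j = j: h'(j) = (h(j) - s_k) / j
    u = {j: (u[j] - s_k) * pow(j, P - 2, P) % P for j in u}
assert out == secrets
```

Each round strips off the current constant coefficient and shifts the remaining coefficients down by one, exactly as described above.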

\begin{myremark}
  The clever idea of expanding the $T$ sharings to $n'$ sharings using
  a linear error-correcting code comes from \cite{mpc2} where it is
  used for the same purpose. 
\end{myremark}

% Since I am working in a prime field, we can safely assume that
% $\beta_j$ is some integer. In particular we fix $\beta_j$ to be
% $j$. In order to manipulate the polynomial we get from using BW. the
% way we want, we should do computations on all the points we do
% lagrange interpolation over. Given that we just computed $s_i$, the
% following expression is what the new points should look like: for
% $j=1,\ldots,n': u'_j = \frac{u_j - s_i}{j}$. This subtracts $s_i$ from
% the polynomial and divides by $\beta_j$ as we wanted.

If we look closer at step 3, it is actually a really cryptic way of
saying the following: If I receive unhappy from anyone in step 2, I
become unhappy \textit{unless} we are doing the robust version of the
protocol, in which case I know I can never receive unhappy from an
honest player, and can thus just ignore it. If I do not receive an
unhappy, I know that all honest players are still happy, which means I
have $n'-t'$ $(T-1)$-consistent values and a total of $n'$
values. This means that I can do error correction to eliminate the
$t'$ errors and compute $s_1,\ldots, s_T$. This is possible since BW
requires $n'>d+2t'$ and in this case $d=T-1=n'-2t'-1$ which, inserted
into the requirement, gives $n'>n'-1$, which is obviously always
true. Let us look at the phrase ``received at least $T+t'$
$(T-1)$-consistent values''. This translates to having received a
total of $n'$ values; otherwise there is no way to make sure that
$T+t'$ of the values are consistent. Consider the case where an honest
player sends unhappy back and a corrupt player sends a corrupt
value. Then we would not be able to do BW, since its requirement is no
longer fulfilled, but since $P_i$ received unhappy from someone, he
becomes unhappy and does not have to run BW. Note that the adversary
can indeed make one honest party output $s_1,\ldots,s_T$, while making
another output unhappy. This does not matter, though, as the segment
is restarted if just one honest player is unhappy.

One might ask why we do not use player elimination at this point
instead of error-correcting codes, but this is unfortunately not
possible since \texttt{ReconsPubl} is used in the computation phase as
well as in the preparation phase. Recall that we need to open
everything in order to find the culprit. For the computation phase,
this would also mean opening the inputs of the users, which breaks the
whole idea. It is not really a secure system if someone could find
out, e.g., what I bid on an item at an auction just by sending a wrong
message. Also, being able to use error correction keeps the
communication cost low as explained earlier.
\newline

For the preparation phase we are going to need the following
sub-protocol, called \texttt{DoubleShareRandom}. It does pretty much
what its name says, namely generating two sharings of the same random
secret. The point is that the two degrees with which the secret is
shared can be chosen freely (they are given as input to the
protocol). This is going to be useful later. First, let us formally
define what a double share is:

\begin{mydef}
  A double share $[s]_{d,d'}$ is defined as $s$ having been shared
  with both degree $d$ and $d'$. We say that the pair of shares held
  by each player $P_i \in \w{P}'$ is called his
  \textnormal{double-share} and is denoted by $[s^{(i)}]_{d,d'}$.
\end{mydef}

The following protocol enables the players in $\w{P}'$ to create $T =
n'-2t'$ uniformly random $(d,d')$-sharings from $n'$ random
secrets. It works by first having each player $P_i \in \w{P}'$ choose
a random secret $s_i$ and share it twice in parallel via the
\texttt{Share} protocol, using $d$ and $d'$ as the degrees.

\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{DoubleShareRandom}$(d,d')$
  \end{centering}

  \begin{enumerate}
  \item $\forall P_i \in \w{P}':$ Choose a random secret $s_i \in_R
    \w{F}$ and call \texttt{Share} two times in parallel, using first
    $d$ and then $d'$ as the degree.

  \item $P_i$ now holds $([s_1^{(i)}]_{d,d'}, \ldots,
    [s_{n'}^{(i)}]_{d,d'})$. All players in $\w{P}'$ now locally
    apply the hyper-invertible matrix $M$ to $[s_1]_{d},\ldots ,
    [s_{n'}]_{d}$ and then to $[s_1]_{d'},\ldots , [s_{n'}]_{d'}$,
    resulting in $[r_1]_{d,d'},\ldots , [r_{n'}]_{d,d'}$. One applies a
    matrix by taking linear combinations, as when multiplying a
    matrix by a vector. Since we only apply linear combinations, each
    $[r_j]$ is a correct $d$- respectively $d'$-sharing, assuming each
    $[s_i]$ was.
    
  \item To ensure that each $[r_i]$ is indeed a correct
    $(d,d')$-sharing, every $P_j \in \w{P}'$ sends his double-share of
    $[r_i]_{d,d'}$ to $P_i$, for $i=T+1,\ldots , n'$. $P_i$ can now
    check whether there exists a degree-$d$ polynomial $g(\cdot)$ on
    which all $n'$ received $d$-shares lie. The same check is done for
    $d'$ with a polynomial $g'(\cdot)$, and finally $P_i$ checks
    whether $g(0) = g'(0)$. If any of the above checks fails, $P_i$
    gets unhappy.
    
  \item The output of the protocol is the $T$ sharings
    $[r_1]_{d,d'},\ldots, [r_T]_{d,d'}$. If all honest players are
    still happy, they will be uniformly random
    $(d,d')$-consistent. Otherwise they are not consistent, but at
    least one honest player is unhappy.
  \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{DoubleShareRandom} communicates $O(n^2\kappa)$ bits.
\end{lemma}
\noindent \textit{Proof:} The expensive procedure is step 1 where each
player runs \texttt{Share} which amounts to $O(n^2\kappa)$ as
\texttt{Share} communicates $O(n\kappa)$. \qed
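Step 2's local matrix application can be sketched in Python. In this toy simulation $M$ is taken to be a Vandermonde matrix purely for concreteness; only the linearity property (applying $M$ to sharings commutes with applying $M$ to the secrets) is demonstrated, not hyper-invertibility:

```python
# Sketch of locally applying a matrix M to a vector of sharings.
# M is a Vandermonde matrix for concreteness; any public matrix
# would demonstrate the linearity property shown here.
import random

P = 101  # toy prime field
n = 5    # n' players

def share(s, d):
    coeffs = [s] + [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for i in range(1, n + 1):
        num, den = 1, 1
        for j in range(1, n + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + shares[i - 1] * num * pow(den, P - 2, P)) % P
    return s

M = [[pow(j, i, P) for j in range(1, n + 1)] for i in range(n)]

secrets = [random.randrange(P) for _ in range(n)]  # each P_i's s_i
sharings = [share(s, 1) for s in secrets]          # [s_i]_d with d = 1

# Each player applies M locally to the shares he holds (column p):
r = [[sum(M[j][i] * sharings[i][p] for i in range(n)) % P
      for p in range(n)]
     for j in range(n)]

# The resulting [r_j] are correct sharings of M * (s_1,...,s_n):
for j in range(n):
    expect = sum(M[j][i] * secrets[i] for i in range(n)) % P
    assert reconstruct(r[j]) == expect
```

Hyper-invertibility is what makes the checking argument work in the correctness proof; for this purely local step, linearity is all that is used.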

\paragraph{Correctness:} We get $n'-T = n'-(n'-2t') = 2t'$ parties to
check whether the protocol was executed correctly. This ensures that
at least $t'$ of the checked $[r_i]$ are correctly shared, assuming
honest players stayed happy. We also know that all honest players have
produced a correct $(d,d')$-sharing. This amounts to $n'-t'$ correct
sharings. Combining these, we know that $(n'-t')+t' = n'$ of all the
sharings are correct. This enables us to linearly compute the
remaining, unknown $([s_i], [r_i])$'s using the property of the
hyper-invertible matrix seen in Lemma \ref{lemmaHIM}. Now, since a
linear combination of correct $(d,d')$-sharings gives a correct
$(d,d')$-sharing, we know that all sharings are consistent. This is
the reason that we need to check that \textit{all} of the sharings sent
to the checking players are correctly shared.

\paragraph{Privacy:} Let us first denote by $s_H$ the honest part of
$s$ with $|s_H|=T$, and by $M^H$ the columns of $M$ that $s_H$
hits. We want to show that the output $[r_1],\ldots, [r_T]$ is
uniformly random, as this proves the adversary does not gain any
information from running the protocol. We will do this by showing that
there exists a bijection from $M^H\cdot s_H$ to $[r_1],\ldots,
[r_T]$. If $M^H \cdot s_H$ is uniformly random and a bijection exists,
then $[r_1],\ldots, [r_T]$ must also be uniformly random. Let us first
convince ourselves that $M^H\cdot s_H$ is uniformly random: We know
that $s_H$ is uniformly random, since the honest players always input
uniformly random values. This makes $M^H\cdot s_H$ uniformly random,
since no element of $M$ can, by construction, be the 0
element. Now, we know that $|s_H|=T$. Let us denote by $A$ the
indexes in $s_H$ and set $B=\emptyset$. This gives us $|A|+|B|=T$. We
can now utilize Lemma \ref{lemmaHIM}, which says that we can use $M$
to map the values $\{s_i\}_{i\in A}, \{r_j\}_{j\in B}$ onto
$\{s_i\}_{i\not\in A}, \{r_j\}_{j\not\in B}$. Thus a bijection exists
from $M^H\cdot s_H$ to $[r_1],\ldots, [r_T]$, since
$\{r_j\}_{j\not\in B}$ is exactly the output of
\texttt{DoubleShareRandom}. \newline

% We know that there are $t'$ corrupted players,
% who combined knows $2t'$ shares when considering the input and output
% sharings $s_i$ and $r_i$. If we fix these, we can examine the
% situation closer. We apply a vector $s$ to the hyper-invertible matrix
% $M$. Let us define the corrupted part of that vector $s_C$ and the
% honest part $s_H$. We do the same for the output vector $r_C$ and
% $r_H$. Let us also denote the part of $M$ where $s_C$ hits as $M_C$
% and the honest part as $M_H$. Now we can express the whole thing as $r
% = M\cdot s = M^H\cdot s_C+M^H\cdot s_H$. We know that the honest
% parties input uniform random values, so that makes $M^H\cdot s_H$
% uniformly random. Since $M$ is hyper-invertible, there exists a
% bijection from $M^H\cdot s_H$ to $M\cdot s$ (see lemma 1), which in
% turn makes $M\cdot s$ uniformly random. The reason we can use lemma 1
% is that we have $2n'-2t' > n'$ shares in total. 

\begin{myremark}
  There is a way to improve the communication cost of this particular
  protocol, based on skipping ahead to player elimination if a player
  becomes unhappy. The details are found in Section \ref{improve}.
\end{myremark}
%\ \newline

To help evaluate the multiplication and input gates in the
computation phase, we need triples of the form
$([a]_t,[b]_t,[c]_t)$, where $c=ab$. If random gates existed, they
would need some random input, for which the first component of a
triple could be used. Input gates likewise use only the first
component, while multiplication gates need the whole triple. To keep
the model simple, we just generate $c_I + c_M$ triples and only use
the $[a]_t$ component in the case of an input gate. This could of
course be done faster and more efficiently, but since it only
increases the communication overhead, I will keep to the simple model.

It is not immediately obvious that it is possible to create such a
triple, since the operation $[a]_t \cdot [b]_t$ yields $[c]_{2t}$. So
how do we reduce the $2t$-sharing to a $t$-sharing -- and why is it a
problem in the first place? Since it requires $2t+1$ shares to open a
$2t$-sharing, and there are $t$ corrupted parties, we do not actually
have a problem the first time. But consider the case where just two
multiplication gates occur in a circuit whose multiplicative depth is
$2$. Here, we first create a $2t$-sharing, then multiply a $t$-sharing
and a $2t$-sharing. This yields a $3t$-sharing, which cannot be
reconstructed using error correction and is therefore not robust. It
gets even worse if we increase the degree to $4t$, since there is no
way to reconstruct the secret anymore: there are not enough shares
available even if everyone were honest. Thus, we need a way to bring
the degree down from $2t$ to $t$. This is an important part of the
protocol \texttt{GenerateTriples}, whose purpose is to generate the
triples needed in the computation phase.


\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{GenerateTriples}$()$
  \end{centering}

  \begin{enumerate}
  \item First the random shares $[a_1]_{t,t'},\ldots, [a_T]_{t,t'}$,
    $[b_1]_{t,t'},\ldots, [b_T]_{t,t'}$ and $[r_1]_{t,2t'},\ldots,
    [r_T]_{t,2t'}$ are generated by calling \texttt{DoubleShareRandom}
    three times in parallel. All a's, b's and r's are thus uniformly
    random and consistent double sharings.
    
  \item For $k=1,\ldots,T:$ The players now locally compute
    $[c_k]_{2t'} = [a_k]_{t'}[b_k]_{t'}$. Also locally, they then
    compute $[d_k]_{2t'} = [c_k]_{2t'} - [r_k]_{2t'}$. This produces a
    $2t'$-sharing which reveals nothing if reconstructed, since the
    random $2t'$-shared $r_k$ is subtracted.

  \item We now call \texttt{ReconsPubl}$(2t', [d_1]_{2t'},\ldots,
    [d_T]_{2t'})$ in order to reconstruct the $[d]$'s. Armed with the
    reconstructed values, the players can locally compute the final
    result as: $[c_k]_t = [r_k]_t + [d_k]_0$, where $[d_k]_0 =
    (d_k,\ldots,d_k)$ is a constant sharing of the reconstructed
    $d_k$.
    
  \item This yields $T$ triples of the format $([a_k]_t, [b_k]_t,
    [c_k]_t)$ which can now be output as a result of running the
    protocol.
  \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{GenerateTriples} communicates $O(n^2\kappa)$ bits. 
\end{lemma}
\noindent \textit{Proof:} The stated communication follows from the
communication cost proofs for \texttt{DoubleShareRandom} and
\texttt{ReconsPubl}. \qed 

\paragraph{Correctness:} We can see that this works if we inspect the
protocol closely. Given $[c_k]_t = [r_k]_t + [d_k]_0$ and $[d_k]_{2t'}
= [c_k]_{2t'} - [r_k]_{2t'}$, we imagine all shares being
reconstructed, which gives $c = r - r + c$, which is indeed
true. Also, since the right-hand $c$ was computed as $[c_k]_{2t'} =
[a_k]_{t'}[b_k]_{t'}$, we are thus ensuring that $c=ab$, without
revealing any vital piece of information.
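The correctness argument can be replayed in a local, error-free simulation. This is only a sketch: one process plays all parties, $t'=t$, reconstruction is done without error correction, and the names are my own:

```python
# Local simulation (single process, error-free) of the degree
# reduction [c]_t = [r]_t + [d]_0 with [d]_{2t'} = [a]_t[b]_t - [r]_{2t'}.
import random

P = 101  # toy prime field
n = 7
t = 2    # here t' = t, so 2t' = 4

def share(s, d):
    coeffs = [s] + [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for i in range(1, n + 1):
        num, den = 1, 1
        for j in range(1, n + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + shares[i - 1] * num * pow(den, P - 2, P)) % P
    return s

a, b, r = 13, 29, random.randrange(P)
a_t, b_t = share(a, t), share(b, t)
r_t, r_2t = share(r, t), share(r, 2 * t)   # double sharing [r]_{t,2t'}

# locally: [d]_{2t'} = [a]_t * [b]_t - [r]_{2t'}  (degree 2t)
d_2t = [(x * y - z) % P for x, y, z in zip(a_t, b_t, r_2t)]
d = reconstruct(d_2t)          # ReconsPubl; reveals only ab - r

# locally: [c]_t = [r]_t + [d]_0
c_t = [(z + d) % P for z in r_t]
assert reconstruct(c_t) == a * b % P
```

The opened value $d=ab-r$ is blinded by the random $r$, while the final $c_t$ shares lie on a degree-$t$ polynomial with constant term $ab$, which is exactly what the triple needs.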

\paragraph{Privacy:} I would like to argue that the adversary cannot
gain any more information than he could compute from his own
shares. The privacy of the generated double sharings follows from
\texttt{DoubleShareRandom}. Multiplying two uniformly random
$k$-sharings yields a $2k$-sharing, and the computation $[d_k]_{2t'} =
[c_k]_{2t'} - [r_k]_{2t'}$ makes $[d_k]_{2t'}$ uniformly random only
because $[r_k]_{2t'}$ is a $2t'$-sharing. If $r_k$ were only
$t'$-shared, we could not argue that the computation yields a random
sharing. This proves privacy for the first steps. Because of this, we
can reveal all the $d$'s, and the final computation is a linear
combination which does not reveal anything you did not know in
advance.

\begin{myremark}
  Note that \texttt{ReconsPubl} might not return the values
  $d_1,\ldots, d_T$; the players will instead be unhappy. If this is
  the case, there is no output of \texttt{GenerateTriples}, which is
  not a problem, since we know that at least one honest player is
  unhappy, which will cause any triples generated in this segment to
  be thrown away.
\end{myremark}
\ \newline 

Now we have all the building blocks we need in order to describe the
preparation phase. This is a protocol run before the circuit is
evaluated, and its purpose is to transform the detectable protocol
\texttt{GenerateTriples} into a robust protocol using player
elimination. In order to keep the communication cost low, we split the
preparation phase into $t$ segments. Since the idea is to detect
corrupted players, it would be costly to restart the entire
preparation phase with one less corrupted player each time. Thus, we
divide it into segments and detect errors at the end of each
segment. This effectively reduces the communication needed by a factor
of $t$. Since the adversary can disrupt the preparation phase at most
$t$ times, the worst case is that he corrupts the computations in all
$t$ segments. If we denote the entire preparation phase's
communication by $G$, each disruption then only causes us to restart
$\nicefrac{G}{t}$ of the communication instead of $G$. Thus, we
effectively save a factor of $t$ in communication.

The $t$ segments each have a length of $l=\lceil
\frac{c_M+c_I}{t}\rceil$ and are run in sequence. Because the segments
are all of equal size, the total cost of the resulting robust protocol
will be at most twice the cost of the detectable
version\footnote{Excluding the overhead of the player elimination
  framework.}, as the adversary can at most make us restart all
segments once. For each of the segments, we run
\texttt{GenerateTriples} $\lceil \frac{l}{T} \rceil$ times in
parallel, since this gives a speed increase without lowering security
compared to running in sequence.

\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{PreparationPhase}$()$
  \end{centering}

  \noindent For each segment $1,\ldots,t$ do:
  \begin{enumerate}
  \item Run \texttt{GenerateTriples} $\lceil \frac{l}{T} \rceil$ times
    in parallel.

    At the end of each segment, we need to know if any player in
    $\w{P}'$ is unhappy. This is the purpose of
    \texttt{FaultDetection}. Additionally all honest players should
    agree on the output no matter what the corrupted players send.

  \item
    \begin{centering} \textbf{Protocol}
      \texttt{FaultDetection}$()$
    \end{centering}
    
    \begin{enumerate}[label=\arabic{enumi}.\arabic{enumii}]
    \item Every $P_i \in \w{P}'$ sends his happy-bit to everyone
      else. If a player receives unhappy, he sets his happy-bit to
      unhappy.

    \item All players in $\w{P}'$ run a Byzantine Agreement on their
      happy-bits. If the outcome is happy, we output the generated
      triples, but if the outcome is unhappy, we need to find the
      culprit who made an error. For this purpose, we run
      \texttt{FaultLocalization}$()$.
    \end{enumerate}

  \item
    \begin{centering}
      \textbf{Protocol} \texttt{FaultLocalization}$()$
    \end{centering}

    \noindent The purpose of \texttt{FaultLocalization} is to find two
    players in $\w{P'}$ where we know for certain that at least one of
    those is corrupted. We call this dispute set of problematic
    players $\w{D}$.

    \begin{enumerate}[label=\arabic{enumi}.\arabic{enumii}]
    \item First we need a referee $P_r$. He is chosen as the player
      with the lowest index among the players in $\w{P'}$.
      
    \item Every $P_i \in \w{P}'$ sends everything they received
      (including \texttt{FaultDetection}) and all random values
      they have chosen in \texttt{GenerateTriples} to the referee
      $P_r$. 

    \item Now $P_r$ has all the information he needs in order to
      reconstruct the entire segment from all players'
      perspectives. This enables $P_r$ to accuse two players $P_i$ and
      $P_j$, where $P_i$ should have sent $x$ but $P_j$ insists that
      $x' \neq x$ was sent to him. $P_r$ then broadcasts $(l, i, j, x,
      x')$, where $l$ is the index of the problematic message.

    \item The accused players broadcast whether they agree with $P_r$
      or not. If $P_i$ disagrees, everyone sets $\w{D} = \{P_i,
      P_r\}$. If $P_j$ disagrees, everyone sets $\w{D}= \{P_j, P_r\}$,
      but if both agree with $P_r$, we set $\w{D}=\{P_i, P_j\}$.
    \end{enumerate}
    
    \noindent Now that we have found $\w{D}$, we finally run the protocol
    \texttt{PlayerElimination}:

  \item    
    \begin{centering}
      \textbf{Protocol} \texttt{PlayerElimination}$()$
    \end{centering}

    \begin{enumerate}[label=\arabic{enumi}.\arabic{enumii}]
    \item All players in $\w{P}'$ set $\w{P'} \leftarrow \w{P'}
      \backslash \w{D}$, $n' \leftarrow n'-2$ and $t' \leftarrow
      t'-1$. Also, if we got here, it was because a segment failed,
      and this segment is then restarted.
    \end{enumerate}
    
  \end{enumerate}
\end{framed}
\begin{lemma}\label{prep}
  \texttt{PreparationPhase} communicates $O((c_M+c_I)n\kappa
  +n^2\kappa+tBA(\kappa))=O((c_M+c_I)n\kappa + n^3\kappa)$ bits.
\end{lemma}
\noindent \textit{Proof:} Starting from the last term, the
$tBA(\kappa)$ comes from the fact that we run consensus (and
broadcasting) $t$ times. In $O$-notation this becomes $O(n^3\kappa)$,
as $t$ depends on $n$ and $BA$ can be done in $O(n^2)$. In
\texttt{FaultLocalization}, all players send everything they received
to the referee, and this is also done $t$ times. Still, in
$O$-notation this only amounts to $O(n^2\kappa)$. The last term comes
from the generation of $\lceil \frac{l}{T} \rceil$ triples
($O(n^2\kappa\lceil \frac{l}{T} \rceil)$). Since
$l=\lceil\frac{c_M+c_I}{t}\rceil$ and this is done $t$ times, we end
up with
$O(\frac{n^2\kappa(c_M+c_I)}{T})=O((c_M+c_I)n\kappa)$. \qed \newline

\begin{myremark}
  The idea of getting a referee to do the computations instead of
  having all players do them, and afterward be able to verify the
  result in a cheap fashion, comes originally from
  \cite{mpc3}.
\end{myremark}

\paragraph{Analysis of FaultDetection:} If any honest player in step 1
sends unhappy, then all honest players are guaranteed to input unhappy
to the Byzantine Agreement. By the rules of Byzantine Agreement, if
all honest players input unhappy, the output will be unhappy. If all
honest players were happy to begin with, we might still output
unhappy, but this is okay, since we will then find a corrupt player
and eliminate him in the next steps.

\paragraph{Analysis of FaultLocalization:} Since we got to this point
in the first place, we know for certain that at least one honest
player became unhappy during the segment. An honest player can only
become unhappy if he received a corrupt value at some point (which
follows from the correctness proofs of the above protocols). Therefore
we know for certain that there is going to be a conflict somewhere
between what the players claim to have received and what they should
have received.

Step 3.4: The reasoning behind the choices for $\w{D}$ is that if
either player disagrees with the referee, then either the referee is
lying and is thus corrupt, or the disagreeing player is corrupt and
would thus want to question the referee. If both agree, then obviously
one of the two accused players is lying, since $x\not= x'$. This
ensures that at least one corrupt player is in the dispute set
$\w{D}$.
\newline
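The dispute-set rule of step 3.4 can be stated compactly as a decision function (an illustrative sketch; the function and argument names are my own):

```python
# Sketch of the dispute-set rule in FaultLocalization step 3.4.
def dispute_set(referee, accused_i, accused_j, i_agrees, j_agrees):
    """Return the pair D of which at least one player is corrupt."""
    if not i_agrees:                 # P_i disagrees with the referee
        return {accused_i, referee}
    if not j_agrees:                 # P_j disagrees with the referee
        return {accused_j, referee}
    return {accused_i, accused_j}    # both agree: one of them lied

assert dispute_set("P_r", "P_i", "P_j", True, True) == {"P_i", "P_j"}
assert dispute_set("P_r", "P_i", "P_j", False, True) == {"P_i", "P_r"}
```

In every branch, the returned pair contains at least one corrupt player, which is all that player elimination needs.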

Note that the triples we output are \textit{always} $t$-shared, which
is the reason we can use them even after eliminating players. It would
be a problem if they were $t'$-shared, since $t'$ may change during
the generation of the triples.

When we have our $c_M+c_I$ triples in place and have dealt with
corrupt players via player elimination, we can finally get to do some
actual computation on a circuit. This is the responsibility of the
computation phase. 

\begin{algorithm}
  \caption{Computation phase explained in pseudocode}
  \begin{algorithmic}
    \WHILE{There are more input gates} 
    \STATE evaluate input gates
    \ENDWHILE
    \WHILE{There are more constant gates} 
    \STATE evaluate constant gates
    \ENDWHILE
    \WHILE{There are more addition or multiplication gates} 

    \WHILE{more addition gates are ready}
    \STATE evaluate addition gates
    \ENDWHILE
    \WHILE{more multiplication gates are ready}
    \STATE evaluate multiplication gates
    \ENDWHILE

    \ENDWHILE
    \WHILE{There are more output gates}
    \STATE evaluate output gates
    \ENDWHILE
  \end{algorithmic}
\end{algorithm}

Algorithm 1 makes sure that only gates that are ready are evaluated,
and thus also ensures that the multiplicative depth is taken into
account. It first goes through all input gates, which involves doing
private reconstructions. It then loops through all constant gates,
assigning a constant sharing of the constant to each gate. Having done
this, we are ready to move on to the main loop: here we first evaluate
all available addition gates, which can be done locally. Then we
evaluate up to $\nicefrac{T}{2}$ multiplication gates at the same
time, assuming enough of them with the same multiplicative depth are
ready. Once these are done, there might be more multiplication gates
at the next multiplicative depth that we can now evaluate; this is the
reason for looping. If not, we loop back to evaluating all possible
addition gates. This handles the whole circuit with the exception of
the output gates. These are evaluated once all addition and
multiplication gates are done, and require a private reconstruction
for each output gate.
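To make the scheduling concrete, here is a small Python sketch of the
batching logic. It is purely illustrative: the function name, the gate
representation (a map from gate id to multiplicative depth), and the
batch-size parameter are my own assumptions, not part of the protocol
specification.

```python
from collections import defaultdict

def schedule_multiplications(gates, T):
    """Group multiplication gates by multiplicative depth, then split each
    depth level into batches of at most floor(T/2) gates, so that every
    batch can be handled with one public reconstruction of up to T values.
    `gates` maps a gate id to its multiplicative depth (hypothetical format)."""
    by_depth = defaultdict(list)
    for gate_id, depth in gates.items():
        by_depth[depth].append(gate_id)
    batch_size = T // 2
    batches = []
    for depth in sorted(by_depth):          # shallower levels must go first
        level = by_depth[depth]
        for k in range(0, len(level), batch_size):
            batches.append(level[k:k + batch_size])
    return batches

# 5 gates at depth 1 and 2 gates at depth 2, with T = 4 -> batches of size <= 2
gates = {"m1": 1, "m2": 1, "m3": 1, "m4": 1, "m5": 1, "m6": 2, "m7": 2}
print(schedule_multiplications(gates, 4))
# -> [['m1', 'm2'], ['m3', 'm4'], ['m5'], ['m6', 'm7']]
```

Each emitted batch corresponds to one call to \texttt{ReconsPubl},
since every gate in a batch contributes two sharings to reconstruct.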

It is only possible to evaluate $\nicefrac{T}{2}$ multiplication gates
at a time, and not $T$, because we only want to make a single call to
\texttt{ReconsPubl}. That call can reconstruct up to $T$ sharings, but
each multiplication gate requires the reconstruction of two different
values, one for each of its two factor shares. A single call therefore
serves only $\nicefrac{T}{2}$ multiplication gates at a time.

On a more formal note, I present the computation-phase protocol, which
describes the steps taken to evaluate the different types of gates:
\begin{framed}
  \begin{centering}
    \textbf{Protocol} \texttt{ComputationPhase}$()$
  \end{centering}
  
  \begin{itemize}
  \item \texttt{Input Gate (User U inputs s):} Each input gate has a
    triple associated with it. The first random sharing $[a]_t$ of the
    triple is reconstructed towards the user responsible for the input
    gate using \texttt{ReconsPriv}$(U,t,[a]_t)$. Note that this is
    robust as $n'>t+2t'$.
    
    $U$ then calculates and broadcasts $d=s-a$. Then, every player $P_i
    \in \w{P}'$ computes his share of the input gate as $s_i = d+a_i$.

  \item \texttt{Constant Gate:} Each $P_i \in \w{P}'$ computes a
    constant-share $[c]_0$ from the constant $c$ associated to the
    constant gate.

  \item \texttt{Addition Gate:} This can be done locally and easily by
    having every player $P_i \in \w{P}'$ add the two associated
    shares.
  \item \texttt{Multiplication Gates:} In order to keep the
    communication cost low, we need to (and can) handle up to $\lfloor
    \nicefrac{T}{2} \rfloor$ multiplication gates at the same time. Let us name
    the factor shares of the $\nicefrac{T}{2}$ gates as $([x_1], [y_1]), \ldots,
    ([x_{T/2}],[y_{T/2}])$, and the associated triples as
    $([a_1],[b_1],[c_1]),\ldots,([a_{T/2}],[b_{T/2}],[c_{T/2}])$. We
    now want to compute the products $[z_k] =[x_k]\cdot [y_k]$, but
    computing them straightforwardly would increase the degree from
    $t$ to $2t$. We therefore use the following trick:
    \begin{enumerate}
    \item For $k=1,\ldots,\nicefrac{T}{2}$: let each $P_i \in \w{P}'$
      compute $[d_k]=[x_k]-[a_k]$, $[e_k]=[y_k]-[b_k]$. 
    \item We can now do a single public reconstruction in order to get
      $(d_1,e_1),\ldots,(d_{T/2},e_{T/2})$. Note that this is robust
      as $n'>t+2t'$.

    \item Now we can compute $[z_k]$ as follows: $[z_k]_t = [d_ke_k]_0
      + d_k[b_k]_t + e_k[a_k]_t + [c_k]_t$.
    \end{enumerate}
  \item \texttt{Output Gate (output [s] to user U):} Call
    \texttt{ReconsPriv}$(U,t,[s]_t)$ which reveals $s$ to only
    $U$. Note that this is robust as $n'>t+2t'$.
  \end{itemize}    
\end{framed}
\begin{lemma}\label{comp}
  \texttt{ComputationPhase} communicates
  $O((c_In+c_On+c_Mn+D_Mn^2)\kappa+c_IBA(\kappa))$ bits, where $D_M$
  is the multiplicative depth of the circuit.
\end{lemma}
\noindent \textit{Proof:} In the input phase we do one private
reconstruction and one broadcast for each of the $c_I$ input gates. In
the output phase we only do a private reconstruction for each of the
$c_O$ output gates. For the multiplications we do one public
reconstruction per $\nicefrac{T}{2}$ gates, which amounts to
$O(c_Mn\kappa)$ bits since $T=n-2t \in \Theta(n)$. However, we cannot
argue this way when the multiplicative depth exceeds one: we then need
at least one reconstruction per depth level, which costs
$O(D_Mn^2\kappa)$ bits in total, as a call to \texttt{ReconsPubl}
costs $O(n^2\kappa)$. Thus all terms are accounted for. \qed
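The multiplication term can be spelled out as follows. If depth level
$\ell$ contains $m_\ell$ multiplication gates, the number of calls to
\texttt{ReconsPubl} is at most $\sum_{\ell=1}^{D_M} \lceil
m_\ell/(T/2)\rceil \le 2c_M/T + D_M$, and since $t<\nicefrac{n}{3}$
gives $T=n-2t>\nicefrac{n}{3}$, the total cost of these calls is
\begin{eqnarray}
  \left(\frac{2c_M}{T} + D_M\right) \cdot O(n^2\kappa) &=& O(c_Mn\kappa + D_Mn^2\kappa). \nonumber
\end{eqnarray}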

\paragraph{Correctness:} As all sub-protocols used are robust and the
triples generated are correct by the correctness proof of
\texttt{GenerateTriples}, it follows that the output of any gate is
correct; in particular, the output gates must be
correct. \texttt{ReconsPriv} and \texttt{ReconsPubl} are robust since
we can always utilize Berlekamp-Welch. Recall that whenever either of
the two is called with degree $t$, it is robust.

\paragraph{Privacy:} As all sub-protocols used are private, they are
guaranteed not to leak any information to the adversary, and as all
sharings are of degree $t$, the adversary cannot gain information from
any shares during the multiplication-gate computations. Thus, it
follows that the adversary learns nothing but $\{x_i,y_i\}_{i\in
  \w{C}}$ where $x_i,y_i$ are the input and output of user $i$. The
adversary is entitled to this information, as it is the purpose of the
protocol to provide user $i$ with output $y_i$. \newline

Note that the addition gate can easily be extended to any linear
function taking any number of inputs, since this also only would
require local computations.

If we look at the last step of the multiplication gate, we can expand
the expression to see that it indeed holds. I will omit the indices,
brackets and sharing degrees:
\begin{eqnarray}
  z &=& de+db+ea+c \nonumber \\
  &=& (x-a)(y-b) + (x-a)b +  (y-b)a + ab \nonumber \\
  &=& xy-xb-ya+ab + xb-ab+ya-ab+ab \nonumber \\
  &=& xy \nonumber
\end{eqnarray}
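The same identity can be checked end-to-end on actual sharings. The
following Python sketch is purely illustrative: it uses a toy prime
field ($P=97$), a naive Shamir implementation without error
correction, and hypothetical helper names (\texttt{share},
\texttt{reconstruct}); it is not the protocol's implementation.

```python
import random

P = 97  # toy prime field F_P; illustrative only


def share(secret, degree, n, rng):
    """Shamir-share `secret` using a random polynomial of the given degree;
    player i (for i = 1..n) receives the evaluation at point i."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(degree)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange-interpolate the points (1, s_1), ..., (n, s_n) at 0."""
    n = len(shares)
    total = 0
    for i in range(1, n + 1):
        num, den = 1, 1
        for j in range(1, n + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total += shares[i - 1] * num * pow(den, P - 2, P)
    return total % P


rng = random.Random(1)
n, t = 7, 2  # toy parameters; no error correction in this sketch
x_val, y_val = 12, 34
a_val, b_val = rng.randrange(P), rng.randrange(P)

x = share(x_val, t, n, rng)
y = share(y_val, t, n, rng)
# A pre-generated triple ([a], [b], [c]) with c = a * b:
a = share(a_val, t, n, rng)
b = share(b_val, t, n, rng)
c = share(a_val * b_val % P, t, n, rng)

# Step 1: each player locally computes d_i = x_i - a_i and e_i = y_i - b_i.
# Step 2: d and e are publicly reconstructed.
d = reconstruct([(xi - ai) % P for xi, ai in zip(x, a)])
e = reconstruct([(yi - bi) % P for yi, bi in zip(y, b)])
# Step 3: [z]_t = de + d[b]_t + e[a]_t + [c]_t, computed locally.
z = [(d * e + d * bi + e * ai + ci) % P for ai, bi, ci in zip(a, b, c)]

z_val = reconstruct(z)
print(z_val == x_val * y_val % P)  # True
```

Note that in step 3 the degree stays $t$: $d$ and $e$ are public
constants, so every term is (at most) a $t$-sharing.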

\begin{theorem}
  The total combined communication cost for \texttt{PreparationPhase}
  and \texttt{ComputationPhase} is $O((c_In^2+c_Mn+c_On+D_Mn^2)\kappa
  + n^3\kappa)$. The protocol \texttt{ComputationPhase} is perfectly
  secure against an active adversary corrupting up to
  $t<\nicefrac{n}{3}$ players.
\end{theorem}
\noindent \textit{Proof:} Since $BA(\kappa)=n^2\kappa$, the broadcasts
are what cause the quadratic cost for the input gates. The remaining
terms follow from Lemmas \ref{prep} and \ref{comp}. Perfect security
follows from the proofs of correctness and privacy of
\texttt{ComputationPhase}. \qed

\subsection{Improving the communication and speed
  overhead} \label{improve} This section is devoted to improving the
protocols' communication overhead and running time. In particular, we
are going to take a closer look at \texttt{DoubleShareRandom}. Recall
that a player merely marks himself as unhappy if the check fails.
Instead of just noting that something went wrong, we can skip all
computation and communication that was supposed to happen afterward
and go directly to the fault-localization step in the
\texttt{PreparationPhase} protocol, since we know that at least one
player is unhappy. This requires a slight change to the protocol,
though: everyone waits for the checks to be done, and the checking
players then send ``happy'' or ``unhappy''. ``Happy'' indicates that
the protocol can continue, whereas ``unhappy'' triggers player
elimination. This is the same as saying that the players should here
initiate \texttt{FaultDetection}, but with step 1 only being performed
by the checking players. If the output is happy, we continue on as
usual. Otherwise we treat this point as the end of the segment, and
thus go to \texttt{FaultLocalization}.

The extension to the protocol has a communication cost of $O(n^2)$, as
this is the cost of running a consensus protocol to agree on a 1-bit
message (see Section \ref{broadcasting}). Since
\texttt{DoubleShareRandom} already had a communication cost of at
least $O(n^2)$, the extension does not increase the asymptotic cost.
Note that if an error is detected, we skip the rest of the
\texttt{GenerateTriples} protocol, which saves at least the cost of
\texttt{ReconsPubl} as well as that of any triple generations running
in parallel.

The problem is that there is no good reason for an optimal adversary
to introduce corruptions at this point, since he might as well wait;
in that case the extension only increases the overhead of the
protocol's communication cost. However, if we are in a situation where
the adversary does not always do what is best for him, e.g.\ because
of random errors due to bad equipment, then this optimization is
helpful. The conclusion is thus that one should consider the setting
the system is to be deployed in and decide from that whether this
extension should be included. \newline

One might think that the same technique could be used everywhere a
player becomes unhappy. This is indeed true, but the only other place
that happens is in the reconstruction protocols, and the last
communication in a generation of triples is \texttt{ReconsPubl}. Note
that this of course only applies to the detectable versions of the
protocols (i.e.\ $d=2t'$). Since the triples are generated in
parallel, using the above technique there would not improve anything.
If it were for some reason only possible to generate triples in
sequence, however, it would be a significant improvement, since the
triples already generated could still be used. This would also
eliminate the need for segments.
