\section{Preliminaries}
Here I intend to give all the information and definitions needed to
understand the protocol. I first go through the set-up of the world we
model in and what security means. This is followed by the basic tools
we need, which include Lagrange interpolation, Berlekamp-Welch,
broadcasting, the hyper-invertible matrices introduced in \cite{mpc1}
and finally the player elimination framework.

\subsection{Set-up}
I use the following model for the rest of the paper: We start out with
a total set of players $\w{P} = \{P_1,\ldots, P_n\}$ and a set of
users $\w{U} =\{U_1,\ldots, U_n\}$. The users give input and receive
output, while the players do all the computations. There are thus $n$
players, of whom up to $t<\nicefrac{n}{3}$ can be controlled by a
central adversary that is computationally unbounded, active, adaptive,
and rushing. We also define $\w{P}'=\{P_1,\ldots,P_{n'}\}$ as the
current set of players, $n'$ as the number of players in $\w{P}'$ and
$t'$ as the number of corrupted players in $\w{P}'$. All this is
covered in detail later and is a result of using player elimination.

Since we want to evaluate some function, we need a finite field to do
our calculations in. We call this finite field $\w{F}$, and I use
$\w{F}=\w{Z}_p$ in this project, where $p$ is the first prime larger
than $2n$. The requirement that the field be larger than $2n$ comes
from the use of hyper-invertible matrices, which are covered later. In
order to evaluate the function, we need a circuit. This can be
arbitrarily big, but consists of only $5$ different gate types: input
gates, constant gates, addition gates, multiplication gates and output
gates. These are denoted $c_I, c_C, c_A, c_M, c_O$ respectively. Gates
are a simple way of evaluating a function, since addition and
multiplication gates alone suffice to evaluate any function. Input
gates represent the users' inputs and output gates the users'
outputs. Constant gates are there to make computations easier. In
\cite{mpc1} they use random gates, which return a random value,
instead of constant gates, but since this does not change the set of
functions I can evaluate, constant gates were the simpler
approach. Utilizing random gates would be quite simple though, as you
would just need to generate $c_R$ (i.e. the number of random gates in
the circuit) random values and assign one to each of those gates.
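To make the gate types concrete, the following small sketch (the list representation and names are my own, not from \cite{mpc1}) evaluates such a circuit in the clear over $\w{Z}_p$; the actual protocol of course evaluates the same gates on shared values:

```python
p = 11  # toy prime; the protocol takes the first prime larger than 2n

def eval_circuit(gates, inputs):
    """Evaluate a circuit given as a list of (kind, operands) gates.

    Gate kinds mirror c_I, c_C, c_A, c_M, c_O; operand indices refer
    to earlier gates in the list. Returns the output-gate values.
    """
    wires, outputs = [], []
    for kind, *ops in gates:
        if kind == "input":      # c_I: a user's input value
            wires.append(inputs[ops[0]] % p)
        elif kind == "const":    # c_C: a public constant
            wires.append(ops[0] % p)
        elif kind == "add":      # c_A: sum of two earlier wires
            wires.append((wires[ops[0]] + wires[ops[1]]) % p)
        elif kind == "mul":      # c_M: product of two earlier wires
            wires.append((wires[ops[0]] * wires[ops[1]]) % p)
        elif kind == "output":   # c_O: reveal a wire as output
            wires.append(wires[ops[0]])
            outputs.append(wires[ops[0]])
    return outputs

# f(x, y) = x*y + 3 over Z_11
circuit = [("input", 0), ("input", 1), ("const", 3),
           ("mul", 0, 1), ("add", 2, 3), ("output", 4)]
```

Note how the input, constant and output gates only move values in and out of the circuit; all actual computation happens in the addition and multiplication gates.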

Let us also denote by $\kappa$ the bit-length of a field element.

I previously mentioned that SMC protocols are required to satisfy the
two conditions of correctness and privacy, but we need to specify the
notion of security used in this thesis, as \cite{mpc1} uses the notion
of perfect security. We therefore say that a protocol is
\textit{perfectly secure} if it satisfies the following two
conditions:

\begin{itemize}
\item Perfect correctness: With probability 1, all parties receive
  correct outputs based on the inputs.
\item Perfect privacy: The adversary cannot learn any more information
  than $\{x_i,y_i\}_{i\in \w{C}}$ for $|\w{C}|<\nicefrac{n}{3}$ even if given
  unlimited time and computing power.
\end{itemize}

\subsection{Basic tools}
In order to understand the more advanced algorithms, we need some
basic tools. These will be mentioned and explained in detail here.

\paragraph{Secret sharing} is the technique of sharing a value with an
arbitrary number of players without them learning the secret you
shared. Each player receives a share of your secret, which can only be
reconstructed if enough shares are gathered in one place. Many schemes
exist that do exactly this, but I will here use Shamir's Secret
Sharing Scheme \cite{SS}. The idea is to take a polynomial $f(\cdot)$
of a degree $d$ high enough that the adversary cannot get hold of
enough shares to reconstruct the polynomial. Given at least $d+1$
shares of such a polynomial, the secret can be recovered through
Lagrange interpolation. Concretely, you choose
$\alpha_1,\ldots, \alpha_d \in_R \w{Z}_p$, where we define $\in_R
\w{Z}_p$ to mean a uniformly random element of $\w{Z}_p$, and define
$f(x) = s + \alpha_1x+\alpha_2x^2+\ldots+\alpha_dx^d$, where $s$ is
the secret. In other words, $f(0)$ yields the secret, which can only
be obtained if you know the true polynomial. If you do not have enough
shares to reconstruct $f$, all possible guesses of $f(0)$ are equally
likely, because there will exist at least one free variable
$\alpha_i$, and $f(0)$ could therefore be any possible value.

The shares are distributed as: $s_i = f(i)\mod p\ $ for
$i=1,\ldots,n$.
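As a minimal sketch (my own illustration, not the project's implementation), the sharing step looks as follows:

```python
import random

def share(secret, d, n, p):
    """Shamir-share `secret` among n players via a random degree-d polynomial."""
    # f(x) = s + a_1 x + ... + a_d x^d with a_j uniformly random in Z_p
    coeffs = [secret] + [random.randrange(p) for _ in range(d)]
    f = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [f(i) for i in range(1, n + 1)]  # player P_i receives s_i = f(i) mod p
```

With $d=t$, any $t$ shares reveal nothing about $s$, while any $d+1$ shares determine it.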

\begin{mydef}
  A $d$-sharing of a secret $s$ is denoted
  $[s]_d=(s_1,\ldots,s_{n'})$ if there exists a polynomial $f(\cdot)$
  of degree $d$ with $f(0)=s$ and $\forall i: f(i)=s_i$. Throughout
  this paper, $[s]_d$ means that every player $P_i \in \w{P}'$ holds
  his share $s_i$ of $s$.
\end{mydef}

\begin{mydef}
  A set of $l$ points is $k$-consistent if there exists a polynomial
  $f$ of degree $k$ such that all $l$ points lie on $f$.
\end{mydef}

\paragraph{Lagrange interpolation} can be used to reconstruct a
polynomial $f$ given at least $\deg(f)+1=k$ shares. For the next part
we will assume $\deg(f)=t$. Given the points:
\begin{eqnarray}
  (x_0, f(x_0)), \ldots, (x_t, f(x_t)) \nonumber
\end{eqnarray}
We can reconstruct the polynomial by the following formula: 
\begin{eqnarray}
  L(x) &:=& \sum_{j=0}^{t}f(x_j)l_j(x)\text{  , where} \nonumber \\
  l_j(x) &:=& \prod_{\begin{smallmatrix}0\le m\le t\\ m\neq
      j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m} \nonumber
\end{eqnarray}

For this project, though, it is not necessary to compute the whole
polynomial, so to ease the programming part, I settled for being able
to input the point I am interested in (such as wanting to know $f(2)$
or $f(0)$, the secret).

Assume the shares are stored in a hash table $H$ mapping integers to
shares, where the integers are all distinct. The integers are the
indices of the players in $\w{P}'$, which we can use since our field
consists of the integers from $0$ to $|\w{F}|-1$, also known as
$\w{Z}_p$. The notation for the next part is: $j\in H$ denotes the
integers (keys) in $H$, and $H(j)$ is the share $j$ maps to. Now we
can write the Lagrange interpolation as:
\begin{eqnarray}
  L(x) &:=& \sum_{j\in H}H(j)l_j(x)\text{  , where} \nonumber \\
  l_j(x) &:=& \prod_{\begin{smallmatrix}m\in H\\ m\neq
      j\end{smallmatrix}} \frac{x-m}{j-m} \nonumber
\end{eqnarray}

This yields $f(x)$ \textit{if} we remember to limit the number of
entries in $H$ to $\deg(f)+1$. Thus, checking whether all $k$ shares
lie on the same polynomial takes $O(k^3)$ time, since a single
Lagrange interpolation takes $O(k^2)$ (as $|H|=O(k)$) and we run it
$k$ times.
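This point-wise interpolation can be sketched as follows (a minimal version of my approach, using Fermat inversion for the division in $\w{Z}_p$ and assuming $|H|=\deg(f)+1$):

```python
def lagrange_at(H, x, p):
    """Evaluate the polynomial defined by shares H (index -> share) at x, mod p."""
    total = 0
    for j, s_j in H.items():
        num, den = 1, 1
        for m in H:
            if m != j:
                num = num * (x - m) % p   # numerator of l_j(x)
                den = den * (j - m) % p   # denominator of l_j(x)
        total += s_j * num * pow(den, p - 2, p)  # divide via Fermat inverse
    return total % p
```

For example, with shares of $f(x)=3+2x+x^2$ over $\w{Z}_{11}$ held by players $1,2,3$, evaluating at $0$ recovers the secret $f(0)=3$.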

\subsubsection{Berlekamp-Welch} \label{BW} Assume you are given a
degree $d$ and $n>d$ points of which only $n-t$ are
$d$-consistent. The goal is then to extract a polynomial $f(X)$ of
degree $d$ on which all $n-t$ points lie, while correcting for the $t$
errors (the points that do not lie on $f(X)$). In other words, we look
for the polynomial from which all the shares would have been computed
if no errors were present. A practical situation where this occurs is
when a player $P_i$ is sent a share from every other player, but $t$
of those players send $P_i$ a corrupted share. $P_i$ would then like
to correct for those errors and still obtain the unique, correct
polynomial. This is where Berlekamp-Welch (BW) comes in handy. It is
not all magic though, as BW requires $n>d+2t$, as will become apparent
later.

We first define $\w{F}[X]_{\leq d}$ as the polynomials of degree at
most $d$ and $\w{F}[X]_{d}$ as the polynomials of degree exactly $d$. 

Let
\begin{eqnarray}
  (x_1,\ldots, x_n) \in \w{F}^n \nonumber 
\end{eqnarray}
be $n$ distinct non-zero elements and $f\in \w{F}[X]_{\leq d}$. Let
\begin{eqnarray}
  (y_1,\ldots, y_n) = (f(x_1), \ldots, f(x_n)) \nonumber
\end{eqnarray}
be a sharing of $f(0)$, and define 
\begin{eqnarray}
  [\tilde{y}]_t = (\tilde{y}_1,\ldots, \tilde{y}_n) = (y_1,\ldots,
  y_n)+(e_1, \ldots, e_n) \nonumber
\end{eqnarray}

The sharing $[\tilde{y}]_t$ is a $t$-sharing of $f(0)$ with added
errors. Let us also define $\w{I_=}$ to be the set of indices where
$e_i = 0$; when no error is added to a share, this corresponds to the
share coming from an honest player. This also means that $|\w{I_=}| >
2t$. Additionally we define $\w{I_{\neq}}$ as the set of indices where
$e_i \neq 0$, i.e. the shares from the corrupted parties. This
obviously gives us that $|\w{I_{\neq}}| \leq t$.

To be able to compute $f(X)$ from $(\tilde{y}_1,\ldots, \tilde{y}_n)$,
we need to assume $n>d+2t$ and that $\w{I}_{\neq}\neq \emptyset$. If
$\w{I}_{\neq}= \emptyset$ there would be no errors, and we could therefore
just use Lagrange interpolation to extract the polynomial $f(X)$. 

Before explaining the actual algorithm, we will first look at the
opposite problem: Given $f(X)$, we want to find $E(X)\in \w{F}[X]_t$ and
$G(X) \in \w{F}[X]_{\leq d+t}$ such that $G(x_i)=E(x_i)\tilde{y}_i \ \forall i \in \w{I}$,
where $\w{I}=\w{I_=}\cup\w{I_{\neq}}=\{1,\ldots,n\}$. We call such a
pair $(G(X),E(X))$ a \emph{solution}.

\paragraph{Proving a solution always exists}\ \newline

The first step is to show that a solution $(G'(X), E'(X))$ can always
be computed given $f(X)$. We let
\begin{eqnarray}
  E'(X)=\prod_{i\in \w{I}_{\neq}}(X-x_i) \label{E(X)}
\end{eqnarray}
This construction ensures that $E'(X) \in \w{F}[X]_{\leq t}$ and that $E'(X)$
is non-zero, since $|\w{I}_{\neq}| \leq t$ and $\w{I}_{\neq}\neq \emptyset$. Note
that $E'(x_i)=0$ for all $i\in \w{I}_{\neq}$, because the product then
contains the factor $(x_i-x_i)=0$.

Now, let: 
\begin{eqnarray}
  G'(X) = E'(X)f(X) \label{G(X)}
\end{eqnarray}
Given the degrees of $E'(X)$ and $f(X)$ it follows that $G'(X) \in
\w{F}[X]_{\leq d+t}$. We can see that $(G'(X), E'(X))$ is a solution by
looking at what happens, first when $i\in \w{I_{\neq}}$:
\begin{eqnarray}
  G'(x_i) = E'(x_i)f(x_i) = 0\cdot f(x_i)=0=0\cdot\tilde{y}_i=E'(x_i)\tilde{y}_i \nonumber
\end{eqnarray}
and then if $i\in \w{I_=}$:
\begin{eqnarray}
  G'(x_i) = E'(x_i)f(x_i) = E'(x_i)\tilde{y}_i \nonumber
\end{eqnarray}

\paragraph{Computing $f(X)$ from a solution}\ \newline

Given a solution $(G(X), E(X))$ we now want to compute $f(X)$. Above we
showed that $G'(X)=E'(X)f(X)$ is one solution, so that
$f(X)=G'(X)/E'(X)$; this is always possible since $E'(X)$ is
non-zero. In fact, we can do this for any possible solution. The claim
I want to show is therefore that:
\begin{eqnarray}
  f(X) = G(X)/E(X) \nonumber
\end{eqnarray}

If we let $E'(X)$ and $G'(X)$ be constructed as by (\ref{E(X)}) and
(\ref{G(X)}), and define two new polynomials as
\begin{eqnarray}
  L(X) = G'(X)E(X) \nonumber \\
  R(X) = G(X)E'(X) \nonumber
\end{eqnarray}
then it holds for all $i\in \w{I}$:
\begin{eqnarray}
  L(x_i) = G'(x_i)E(x_i) = (E'(x_i)\tilde{y}_i)E(x_i) =
  E'(x_i)(\tilde{y}_iE(x_i)) = E'(x_i)G(x_i) = R(x_i) \label{LR}
\end{eqnarray}

This alone is not enough to prove that $L(X)=R(X)$, but since we have
$L(X),R(X) \in \w{F}[X]_{\leq d+2t}$ combined with (\ref{LR}) for $n>d+2t$
points $x_i$, it must be that $L(X)=R(X)$. If (\ref{LR}) were only
guaranteed to hold for, e.g., $d$ points, there would be no guarantee
that the polynomials agree in the remaining $2t$ points. This is why
we require $n>d+2t$ for Berlekamp-Welch to work. 

This means that 
\begin{eqnarray}
  G'(X)E(X) &=& G(X)E'(X) \nonumber\\
  \Updownarrow &&\nonumber\\
  \frac{G(X)}{E(X)} &=& \frac{G'(X)}{E'(X)} = f(X) \nonumber
\end{eqnarray}

which proves that given a solution you can always compute $f(X)$. 

\paragraph{Computing a solution}\ \newline

Now that we know a solution always exists and that it enables us to
efficiently compute $f(X)$, we will focus on computing $E(X),G(X)$
using linear algebra. Assuming, as above, $d=t$, we first express
$E(X)$ and $G(X)$ as:
\begin{eqnarray}
  E(X)=X^t + \sum_{j=0}^{t-1}A_jX^j \nonumber \\
  G(X) = \sum_{j=0}^{2t}B_jX^j\nonumber 
\end{eqnarray}

If we now insert these equations into $G(x_i)=E(x_i)\tilde{y}_i$, we
get:

\begin{eqnarray}
  \sum_{j=0}^{2t}B_jx_i^j = \tilde{y}_i(x_i^t +
  \sum_{j=0}^{t-1}A_jx_i^j) ,\quad i=1,\ldots,n \nonumber 
\end{eqnarray}
which, if we let $b_{i,j} = x_i^j$ and $a_{i,j} = \tilde{y}_ix_i^j$,
can be written as:

\begin{eqnarray}
  \sum_{j=0}^{2t}b_{i,j}B_j = a_{i,t} +
  \sum_{j=0}^{t-1}a_{i,j}A_j ,\quad i=1,\ldots,n \nonumber 
\end{eqnarray}
This gives us $n$ linear equations with at most $n$ unknowns, since we
have $2t+1$ unknowns in the $B_j$ and the remaining $t$ unknowns in
the $A_j$. Such a linear system is always efficiently solvable by
Gaussian elimination. Note that in order to get a non-singular matrix,
and thereby a unique solution, from the linear system, we need $n$ to
be no larger than $n=3t+1$ and the set $\w{I_{\not=}}$ has to have
size exactly $t$. Otherwise there will be at least one free variable,
which is not a problem, but is something to think about when
implementing it.

The output of running BW on $n$ points with $t$ errors is the unique
polynomial $f(\cdot)$ of degree $d$ that the remaining $n-t$ points
lie on once the $t$ errors are removed.
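The whole procedure can be sketched as follows (my own illustration for the case $d=t$; since any solution of the linear system yields $f$, the sketch simply sets free variables to zero):

```python
def solve_mod(M, rhs, p):
    """Gaussian elimination mod prime p; free variables are set to 0."""
    rows, cols = len(M), len(M[0])
    A = [M[r][:] + [rhs[r]] for r in range(rows)]
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if A[i][c] % p), None)
        if pr is None:
            continue  # free column
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], p - 2, p)
        A[r] = [v * inv % p for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(cols + 1)]
        pivots.append(c)
        r += 1
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = A[i][cols]
    return x

def poly_div(num, den, p):
    """Divide polynomials (coefficient lists, lowest degree first) mod p."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * pow(den[-1], p - 2, p) % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % p
    return q

def berlekamp_welch(xs, ys, t, p):
    """Recover f with deg(f) <= t from n > 3t points, at most t corrupted."""
    rows, rhs = [], []
    for x, y in zip(xs, ys):
        row = [(-y * pow(x, j, p)) % p for j in range(t)]   # -y * A_j x^j terms
        row += [pow(x, j, p) for j in range(2 * t + 1)]     # B_j x^j terms
        rows.append(row)
        rhs.append(y * pow(x, t, p) % p)                    # y * x^t
    sol = solve_mod(rows, rhs, p)
    E = sol[:t] + [1]         # E(X) = X^t + sum A_j X^j (monic)
    G = sol[t:]               # G(X) = sum B_j X^j
    return poly_div(G, E, p)  # f = G / E, remainder is zero
```

For instance, four points of $f(x)=5+3x$ over $\w{Z}_{97}$ with one corrupted share still yield the coefficients $(5,3)$.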

\subsubsection{Broadcasting} \label{broadcasting} Being able to
broadcast is an essential part of the protocol. I have, however, not
implemented broadcasting, since doing so properly would require
another master's thesis. The theory is not that difficult though, so I
will explain it:

Broadcasting is a protocol enabling a party $P_B$, the broadcaster, to
send his input $x$ to all other parties $P_1, \ldots, P_n$, who then
each output a value $y_i$. We require three things of the protocol:
\begin{itemize}
\item \textit{Termination:} The protocol must always terminate.
\item \textit{Validity:} If $P_B$ is honest, then $x=y_i$ for all honest
$P_i$.
\item \textit{Agreement:} Even assuming $P_B$ is corrupt, it has to be
true that $y_i=y_j$ for all honest parties $P_i,P_j$. 
\end{itemize}

Broadcasting is closely linked to the Byzantine Agreement
problem\footnote{covered in detail later.}, which is derived from the
byzantine generals problem described in \cite{byzantine}. This is
because there is a one-to-one translation between Dolev-Strong
broadcasting \cite{Dolev-Strong} and the byzantine agreement problem,
as presented in \cite{broadcast}, chapter 7.

\paragraph{Dolev-Strong} uses rounds, where each round lasts a limited
time in order to ensure termination. To enable this, synchronized
timers are deployed for each party before the protocol execution. Each
round is split into two sub-rounds: receive and send. Figure \reff{DS}
illustrates this graphically. We assume a public-key crypto system is
in place. The protocol begins with the broadcaster $P_B$ signing the
message he wants to broadcast and sending the signed message to all
other parties. When a party receives in round $i$, it waits for
messages sent in round $i-1$ and only processes signature-chains of
length $i$. A signature chain is recursive in its definition, as it is
a signature of a signature chain, where the core consists of the
message the broadcaster originally signed. The received messages are
then checked for validity (i.e. the message $M$ contains the same
content $v$ as those seen before and has $i$ different signatures
attached). If everything went well, you attach your own signature to
the signature chain and send it to everyone. If there was a dispute in
the contents received, you attach your signature to both chains and
send them to all; in that case you also output a default value,
stating that the broadcast failed. The same happens if you never
received a valid message. When nothing goes wrong, you can safely
output the content you received, which now carries signatures from
all.

\fig{0.4}{Dolev-Strong.png}{DS}{The
  Dolev-Strong protocol}
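The round structure can be illustrated with the following toy simulation (entirely my own sketch: ``signatures'' are modeled as appended party identifiers, which only works because the simulation is local, and only the broadcaster may be corrupt here; a real deployment needs an unforgeable signature scheme):

```python
def dolev_strong(n, t, first_round_msgs):
    """Toy simulation with broadcaster P_0; parties 1..n-1 are honest.

    first_round_msgs[i] is the signed message P_0 sends P_i in round 1
    (None if the corrupt broadcaster skips P_i). Returns each party's
    output: the extracted value, or None (a default) on failure.
    """
    extracted = [set() for _ in range(n)]
    inbox = [[] for _ in range(n)]
    for i in range(1, n):
        if first_round_msgs[i] is not None:
            inbox[i].append((first_round_msgs[i], [0]))  # chain = signer list
    for r in range(1, t + 2):                  # rounds 1 .. t+1
        outbox = [[] for _ in range(n)]
        for i in range(1, n):
            for msg, chain in inbox[i]:
                # valid chain: length r, starts with P_0, distinct signers
                ok = (len(chain) == r and chain[0] == 0
                      and len(set(chain)) == len(chain) and i not in chain)
                if ok and msg not in extracted[i]:
                    extracted[i].add(msg)      # accept the value
                    for j in range(n):         # re-sign and forward to all
                        outbox[j].append((msg, chain + [i]))
        inbox = outbox
    return [next(iter(e)) if len(e) == 1 else None for e in extracted[1:]]
```

An honest broadcaster makes every honest party output his value; a broadcaster sending conflicting values makes every honest party output the default, so Agreement still holds.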

Note that if an honest party starts the broadcast, it will terminate
in round 2, as everyone will receive a valid chain and output $v$. If
a malicious party begins broadcasting, it will terminate after at most
$t+1$ rounds, where $t$ is the number of corrupted parties. This is
because the adversary can only send corrupt chains for $t$ rounds (as
a chain is discarded if the same signature appears twice). The
protocol therefore terminates if $t<n$. Setting $t<\nicefrac{n}{2}$
enables us to go from broadcasting to Byzantine Agreement, as the
protocol in the next section shows:

\paragraph{Byzantine Agreement} is the problem of getting $n$ players
to agree on an output. Byzantine Agreement is also known as
\textit{Consensus}. One can split the problem into \textit{Agreement}
and \textit{Validity}. \textit{Agreement} denotes that if two honest
parties output $w_i$ and $w_j$ respectively, then $w_i =
w_j$. \textit{Validity} denotes that if all honest parties give the
same input, then every honest party must output that common input.

If the parties are synchronized and can sign messages, we can obtain a
bound of $t<\nicefrac{n}{2}$ \cite{broadcast}. 

\begin{framed} \label{BA}
  \begin{centering}
    \textbf{Protocol} \texttt{ByzantineAgreement}$()$
  \end{centering}
  
  \begin{enumerate}
    \item Every $P_i \in \w{P}'$ broadcasts his vote.
    \item The players in $\w{P}'$ then output the value chosen by the
      majority of the votes. 
  \end{enumerate}
\end{framed}
\begin{lemma}
  \texttt{ByzantineAgreement} satisfies both the Agreement and
  Validity requirements for a byzantine agreement protocol.
\end{lemma}
\noindent \textit{Proof:} Since every vote is broadcast, all honest
players see the same list of votes and therefore compute the same
majority value, which gives Agreement. As we assume
$t<\nicefrac{n}{2}$, the majority of the players are honest, so if all
honest players input the same value, that value wins the majority
vote; this gives Validity. \qed \newline
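The decision rule of \texttt{ByzantineAgreement} can be sketched as follows (my own minimal version; the broadcasts guarantee every player applies it to the identical vote list):

```python
from collections import Counter

def byzantine_agreement(broadcast_votes):
    """Every player applies this to the identical list of broadcast votes."""
    value, _ = Counter(broadcast_votes).most_common(1)[0]  # majority vote
    return value
```

With an honest majority voting the same value, that value always wins.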

Broadcasting a message, and thereby agreeing on a message and
obtaining validity, can be simulated by communicating
$BA(\kappa)=O(n^2\kappa)$ bits.

\subsubsection{Hyper-Invertible Matrices}
\begin{mydef}
  An r-by-c matrix $M$ is \textit{hyper-invertible} if for any
  combination of index sets $R\subseteq\{1,\ldots,r\}$ and
  $C\subseteq\{1,\ldots,c\}$ with $|R|=|C| > 0$, the matrix $M_R^C =
  (M_R)^C$ is invertible. $M_R$ denotes the matrix consisting of the
  rows indexed by $R$, and $M^C$ denotes the matrix consisting of the
  columns indexed by $C$.
\end{mydef}

The hyper-invertible matrix construction from \cite{mpc1} can be used
to generate $\Omega(n)$ uniformly random sharings at the low cost of
$O(n^2)$. The generation is detectable, meaning that one can check
whether something went wrong.

Such a hyper-invertible n-by-n matrix over a finite field $\w{F}$ with
$|\w{F}| \geq 2n$ can be constructed as follows: Let $\alpha_1,
\ldots, \alpha_n, \beta_1, \ldots, \beta_n$ be fixed distinct elements
in $\w{F}$. Then the matrix
$M=\{\lambda_{i,j}\}_{i=1,\ldots,n}^{j=1,\ldots,n}$, where
$\lambda_{i,j} = \prod_{\begin{smallmatrix} k=1\\ k \neq j
  \end{smallmatrix}}^n \frac{\beta_i - \alpha_k}{\alpha_j -
  \alpha_k}$, is a hyper-invertible matrix. $M$ is actually a function
in disguise, since it can be considered a function mapping
$(x_1,\ldots,x_n)$ to $(y_1,\ldots,y_n)$ such that
$(\beta_1,y_1),\ldots,(\beta_n,y_n)$ lie on the polynomial $g(\cdot)$
of degree at most $n-1$ defined by the points
$(\alpha_1,x_1),\ldots,(\alpha_n,x_n)$.
\begin{lemma}
  The above construction generates a hyper-invertible matrix.
\end{lemma}
\noindent \textit{Proof:} We want to prove that for any index sets
$R,C\subseteq\{1,\ldots,n\}$ with $|R|=|C|>0$, $M_R^C$ is
invertible. Since $|R|=|C|$, the matrix $M_R^C$ is square, so it is
invertible exactly when it is surjective, i.e. when for every
$\vec{y}_R$ there exists an $\vec{x}_C$ such that
$\vec{y}_R=M_R^C\vec{x}_C$. To see this equivalence, let $A$ be the
domain and $B$ the codomain of the map. Surjectivity means every point
in $B$ is hit, and since $|R|=|C|$ the two spaces have the same finite
size, so no two points of $A$ can hit the same point of $B$; hence
surjectivity implies injectivity, and the map is invertible. To make
it easier to show that $M_R^C$ is surjective, we look at the
equivalent problem of showing that for every $\vec{y}_R$ there exists
an $\vec{x}$ such that $\vec{y}_R=M_R\vec{x}$ and
$\vec{x}_{\overline{C}}=\vec{0}$, where
$\overline{C}=\{1,\ldots,n\}\backslash C$; the equivalence holds
because such an $\vec{x}$ is just $\vec{x}_C$ padded with zeroes
outside $C$. Consider now the points $(\alpha_i,0)_{i\in
  \overline{C}}$ and $(\beta_j,y_j)_{j\in R}$. Together these amount
to $n$ points which all lie on $g(\cdot)$. This means we can use
Lagrange interpolation to compute $g(\cdot)$, and from it we can
compute $\vec{x}_C$, as the points $\{(\alpha_i,\vec{x}_i)\}_{i\in C}$
also lie on $g(\cdot)$. We have thus shown that $M_R^C$ is surjective,
and hence invertible, and as this holds for all possible choices of
$R$ and $C$, $M$ is hyper-invertible. \qed
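The construction can be sketched directly from the formula (a small illustration; the concrete choices $\alpha_j=j$ and $\beta_i=n+i$ are mine):

```python
def him(n, p):
    """Build the n-by-n hyper-invertible matrix with lambda_{i,j} = l_j(beta_i)."""
    alphas = list(range(1, n + 1))           # distinct alpha_1..alpha_n
    betas = list(range(n + 1, 2 * n + 1))    # distinct beta_1..beta_n (needs |F| >= 2n)
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            lam = 1
            for k in range(n):
                if k != j:  # Lagrange basis l_j evaluated at beta_i
                    lam = lam * (betas[i] - alphas[k]) % p
                    lam = lam * pow(alphas[j] - alphas[k], p - 2, p) % p
            M[i][j] = lam
    return M

def apply(M, x, p):
    """Matrix-vector product over Z_p."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) % p for i in range(len(M))]
```

Because row $i$ of $M$ consists of the Lagrange basis values $l_j(\beta_i)$, applying $M$ to the values of any polynomial $g$ of degree at most $n-1$ at the $\alpha$'s yields its values at the $\beta$'s.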

\paragraph{Usability:} The hyper-invertible matrix has the following
very useful property, which is later used to prove various security
properties:

\begin{lemma}\label{lemmaHIM}
  Given a hyper-invertible n-by-n matrix $M$ and $(y_1,\ldots,y_n)=
  M(x_1,\ldots,x_n)$, we have that for any index sets $A,B \subseteq
  \{1,\ldots,n\}$ where $|A|+|B| = n$, we can use $M$ and
  $\{x_i\}_{i\in A}, \{y_i\}_{i \in B}$ to compute $\{x_i\}_{i\not\in
    A}, \{y_i\}_{i\not\in B}$.
\end{lemma}

\noindent \textit{Proof:} Given $\vec{y} = M\vec{x}$, write $\vec{y}_B =
M_B\vec{x} = M_B^A\vec{x}_A +
M_B^{\overline{A}}\vec{x}_{\overline{A}}$. We want to find
$\vec{x}_{\overline{A}}$, so we express it as $\vec{x}_{\overline{A}}
= (M_B^{\overline{A}})^{-1}(\vec{y}_B-M_B^A\vec{x}_A)$. Since $|B| =
n - |A| = |\overline{A}|$, the matrix $M_B^{\overline{A}}$ is square,
so by hyper-invertibility $(M_B^{\overline{A}})^{-1}$ exists. We can
then compute $\vec{y}_{\overline{B}}$ by the following expression:
$\vec{y}_{\overline{B}}=
M_{\overline{B}}\vec{x}=M_{\overline{B}}^A\vec{x}_A+M_{\overline{B}}^{\overline{A}}\vec{x}_{\overline{A}}$
\qed
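A tiny worked instance of the lemma (the matrix and numbers are my own toy example, with $n=2$, $A=B=\{1\}$ over $\w{Z}_5$):

```python
p = 5
M = [[1, 1], [1, 2]]  # hyper-invertible: every entry and M itself are invertible

def complete(x1, y1):
    """Given x_A = (x1) and y_B = (y1) with A = B = {1}, recover x2 and y2."""
    # y1 = M[0][0]*x1 + M[0][1]*x2  =>  x2 = (y1 - M[0][0]*x1) / M[0][1]
    x2 = (y1 - M[0][0] * x1) * pow(M[0][1], p - 2, p) % p
    y2 = (M[1][0] * x1 + M[1][1] * x2) % p
    return x2, y2
```

With $\vec{x}=(3,4)$ we get $\vec{y}=M\vec{x}=(2,1)$ over $\w{Z}_5$, and indeed knowing only $x_1=3$ and $y_1=2$ recovers $x_2=4$ and $y_2=1$.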

\subsection{Lacks and absences}
In this section I will briefly touch upon issues which are not
directly part of the protocol, but will be necessary in an actual
implementation in a real-world scenario.

\paragraph{Authentication.} We assume that the players are on a
secure, authenticated network connection with broadcast
available. Here follow my suggestions as to what could be done about
these assumptions.

Broadcasting can be handled by implementing Dolev-Strong using
timeouts. This requires signatures as well as synchronized timers. The
signature part can be handled by making sure the infrastructure to
support it is there, namely by having a public-key crypto system
installed. This way, every time a party $P_i$ sends something, $P_i$
signs the message, enabling other parties who have $P_i$'s public key
to verify that it was indeed $P_i$ who sent the message. The
synchronized timers need to be installed beforehand as well, since you
cannot do timeouts without synchronized clocks if you want the results
to be correct. How to achieve this goes beyond the scope of this
thesis, so suffice it to say they are needed.

The system would still suffer from replay attacks; the remedy could be
communicating over SSL or using message counters that increase with
every sent message.

\subsection{Player Elimination}
First I present two important definitions used throughout this thesis:

\begin{mydef}
  A protocol is \textnormal{\textbf{detectable}} if it is only secure
  against a passive adversary and can thus produce incorrect output if
  we allow an active adversary. Such an error will, however, be
  detected by at least one honest player, who then sets his happy-bit
  to unhappy, indicating that he detected an error in the protocol.

  The \textnormal{happy-bit} is a private boolean value owned by each player
  indicating whether he detected any faults, i.e. whether he thinks
  player elimination should commence.
\end{mydef}

\begin{mydef}
  A protocol is \textnormal{\textbf{robust}} if it is perfectly secure
  against an active adversary who can corrupt $t<\nicefrac{n}{3}$
  players. This means that a robust protocol always yields correct
  outputs without leaking any information to the adversary.
\end{mydef}

In order to make the protocol secure and to keep the communication
cost low, we make use of player elimination. The main protocol has a
preparation phase and a computation phase. The idea is to make the
detectable protocols robust by getting rid of possibly corrupted
parties in the preparation phase if they send corrupt data there. If
the corrupted parties do not try to create faults during the
preparation phase, it will have produced the building blocks needed
for the computation phase to be secure. The player elimination
technique thus allows us to transform a detectable protocol into a
robust one.

All protocols in the preparation phase are detectable, which means we
will know if someone cheated. If this is the case, we can find a
dispute between two players where we know for sure that at least one
of them is corrupt. We can then eliminate them from any further
participation in the protocol. It is important to note that only
players can get eliminated. Users are always present, as we need their
input for the circuit, and since they only partake in robust
protocols, there is no need for elimination.

Player elimination works as follows: set $\w{P}'\leftarrow \w{P},
n'\leftarrow n, t'\leftarrow t$. Now, to keep the communication cost
low, we split the computation up into segments. A segment proceeds as
follows: First run the detectable computations, then check whether the
computations were unsuccessful due to corruption by using fault
detection. The corrupted player, along with the necessary sacrifice of
a possibly honest player, is then eliminated using fault localization,
and the segment is restarted. Fault detection works by having all
honest players agree on whether any honest player is unhappy and, if
so, starting the fault localization. If not, the segment was
successful, and the protocol continues on to the next
segment. Fault localization is the process of discovering a dispute
between two parties $E=\{P_i, P_j\}$, of whom we can guarantee that at
least one is corrupt. These two are then eliminated. After this, we
work with a reduced set of active players, so we update accordingly:
$\w{P}' \leftarrow \w{P'}\backslash E, n'\leftarrow n'-2, t'\leftarrow
t'-1$. Note that after a player elimination it will still be true that
$n'>t+2t'$, as $n'$ decreases by 2 and $t'$ by 1.
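The bookkeeping of a single elimination can be sketched as follows (the names are mine):

```python
def eliminate(players, n_prime, t_prime, dispute):
    """Remove the localized dispute pair E = {P_i, P_j} and update n', t'."""
    players = [P for P in players if P not in dispute]
    return players, n_prime - 2, t_prime - 1

# Start from P' = P, n' = n, t' = t with n = 3t + 1
players, n_p, t_p, t = list(range(1, 8)), 7, 2, 2
players, n_p, t_p = eliminate(players, n_p, t_p, {3, 5})
```

Each elimination lowers both sides of $n' > t + 2t'$ by two, so the invariant is preserved.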

The communication cost is thus reduced whenever a corrupted party
creates a fault, since there will be two players fewer to send to. The
fault localization step is expensive in itself, but the adversary can
cause it to happen at most $t$ times. 

One might wonder why we cannot use player elimination in the
computation phase as well, but part of the fault localization step
consists of revealing everything to a single player. In the
computation phase this would include the secret inputs, which is not
secure in any way; this is why every protocol in the computation phase
must be robust. 