\section{Implementation}
\label{impl}
To experiment with the verifiable secret sharing scheme proposed
in~\cite{Zhou:2005:APS:1085126.1085127}, we implemented the following components:
\begin{itemize}
	\item Verifiable secret sharing scheme
	\item Hybrid cryptosystem
	\item Standalone deployment of the basic secret sharing scheme
	\item Distributed deployment of the verifiable secret sharing scheme
	\item Experiments
	\item Unit tests
\end{itemize}

We used Subversion as our version control system and a Google Code project
\cite{ds-final-project} to host the source and other project artifacts.
The source code can be obtained from the following Subversion repository: \\

http://ds-final.googlecode.com/svn/java/


\subsection{Verifiable Secret Sharing Scheme}

As shown in figure~\ref{alg}, this scheme requires two primes $p$
and $q$ where $p = 2q + 1$. Furthermore, it requires two elements of
$\mathbb{Z}_p^*$, $g$ and $h$, both of order $q$. We used a set of known values for
these parameters in our implementation. The values are defined in the
edu.purdue.cs.ds.vss.Constants class.
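The parameter constraints above can be sanity-checked as in the following sketch. This is illustrative only: the class name is hypothetical and the toy values in the test ($p = 23$, $q = 11$) are not the constants used in the project.

```java
import java.math.BigInteger;

// Illustrative sanity check of the scheme parameters (not the project's
// Constants class): p must be a safe prime p = 2q + 1, and g, h must be
// elements of order q in Z_p^*.
public class ParamCheck {
    public static boolean valid(BigInteger p, BigInteger q, BigInteger g, BigInteger h) {
        // p = 2q + 1 with both p and q (probably) prime
        boolean safePrime = p.equals(q.shiftLeft(1).add(BigInteger.ONE))
                && p.isProbablePrime(64) && q.isProbablePrime(64);
        // an element x != 1 with x^q = 1 mod p has order exactly q (q prime)
        boolean orderQ = g.modPow(q, p).equals(BigInteger.ONE) && !g.equals(BigInteger.ONE)
                && h.modPow(q, p).equals(BigInteger.ONE) && !h.equals(BigInteger.ONE);
        return safePrime && orderQ;
    }
}
```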

The distribution of shares as defined in the scheme used in the APSS paper
\cite{Zhou:2005:APS:1085126.1085127} is shown in figure~\ref{fig:fig2}.

In implementing the verifiable secret sharing scheme, we first implemented the
share generator (edu.purdue.cs.ds.vss.Generator), which splits a given secret
into a set of share values ($\{s_i : 0 < i < l\}$, where $l = {{n}\choose{k}}$ is the
number of shares the generator generates, and $n$ and $k$ are parameters of the
system as defined in section 2.1) such that the modular addition of the share values
equals the given secret value. Furthermore, the generator accepts a random number
associated with the secret value, which is split into shares
($\{r_i : 0 < i < l\}$) as well.
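The additive splitting step can be sketched as follows. This is a minimal illustration, not the project's Generator class: $l - 1$ shares are drawn at random from $\mathbb{Z}_q$ and the last share is chosen so that the shares sum to the secret modulo $q$.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Illustrative additive secret splitting (not edu.purdue.cs.ds.vss.Generator):
// split a secret into l shares modulo q whose sum reconstructs the secret.
public class AdditiveSplit {
    public static BigInteger[] split(BigInteger secret, int l, BigInteger q) {
        SecureRandom rnd = new SecureRandom();
        BigInteger[] shares = new BigInteger[l];
        BigInteger sum = BigInteger.ZERO;
        // pick l - 1 shares uniformly at random in Z_q ...
        for (int i = 0; i < l - 1; i++) {
            shares[i] = new BigInteger(q.bitLength(), rnd).mod(q);
            sum = sum.add(shares[i]).mod(q);
        }
        // ... and fix the last one so the shares sum to the secret mod q
        shares[l - 1] = secret.subtract(sum).mod(q);
        return shares;
    }
}
```

The same routine applies unchanged to the random value $r$, yielding the $r_i$ shares.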

Next, the generator creates a verifiable sharing as an
edu.purdue.cs.ds.vss.VerifiableSharing instance, which contains a Pedersen
commitment\cite{Pedersen91} for each $s_i$, $r_i$ pair, a Pedersen commitment
of the main secret and its random value, and the lists of $s_i$ and $r_i$ values.
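A Pedersen commitment to a pair $(s, r)$ is $C = g^s h^r \bmod p$, which can be sketched as below. The class name is illustrative and the toy parameters in the test ($p = 23$, $g = 4$, $h = 9$) are not the project's constants; the test exercises the additive homomorphism that the verification steps later rely on.

```java
import java.math.BigInteger;

// Illustrative Pedersen commitment: C = g^s * h^r mod p, with g and h of
// order q in Z_p^*. Not the project's VerifiableSharing implementation.
public class Pedersen {
    public static BigInteger commit(BigInteger s, BigInteger r,
                                    BigInteger g, BigInteger h, BigInteger p) {
        // g^s * h^r mod p
        return g.modPow(s, p).multiply(h.modPow(r, p)).mod(p);
    }
}
```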

Then we generate sets of indices for each of the $n$ servers according to the $l:l$
sharing scheme defined in \cite{Zhou:2005:APS:1085126.1085127} and obtain $n$
lists that correspond to the $n$ servers. These lists are generated using the
edu.purdue.cs.ds.ServerSets and edu.purdue.cs.ds.vss.IndexSets classes.

The edu.purdue.cs.ds.vss.Verifier class implements the functionality required
to verify a given verifiable sharing or a given share along with its commitments.

We further implemented a secret reconstruction mechanism to reconstruct the
original secret using share sets from more than $k$ servers
(edu.purdue.cs.ds.vss.Reconstructor). This allows us to reconstruct the secret after
any number of share refreshings, provided that we feed it the complete share sets of
$k + 1$ servers, where each server is in the same stage of share refreshing.
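Reconstruction can be sketched as below, under the assumption (guaranteed by the $l:l$ index assignment) that the stores of $k + 1$ servers together cover all $l$ share indices. The class name is hypothetical, not the project's Reconstructor.

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative reconstruction (not edu.purdue.cs.ds.vss.Reconstructor):
// merge the share stores so each index appears once, then add the share
// values modulo q to recover the secret.
public class Reconstruct {
    public static BigInteger recover(BigInteger q, List<Map<Integer, BigInteger>> stores) {
        Map<Integer, BigInteger> merged = new HashMap<>();
        for (Map<Integer, BigInteger> store : stores) merged.putAll(store);
        BigInteger sum = BigInteger.ZERO;
        for (BigInteger v : merged.values()) sum = sum.add(v);
        return sum.mod(q);
    }
}
```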

\begin{figure}[ht]
\centering
\resizebox{!}{8 cm}{\includegraphics{img/llscheme.png}}
\caption{$l:l$ Secret Sharing Scheme}
\label{fig:fig2}
\end{figure}

\subsection{Hybrid Cryptosystem}
When the dealer sends the original share values to each server, and when
servers send the subsharings to each other, all messages are required to be
encrypted and signed.
We implemented the standard hybrid encryption scheme, where
encryption first creates an ephemeral symmetric (AES) key which is
used to encrypt the content, and then this ephemeral symmetric key is encrypted
using the recipient's public key. Each server's public key is made known to
the others when setting up the servers.
We use a SHA1/RSA signature, where a SHA1 digest of the content is signed using the
signer's private RSA\cite{RSA} key.
The sign, verify, encrypt, decrypt functionality is implemented in the
edu.purdue.cs.ds.Cryptography class.
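The hybrid encryption step can be sketched with the standard JCE APIs as below. This is an illustration, not the project's Cryptography class: signing is omitted, the class name is hypothetical, and the cipher transformations are the JCE defaults chosen for brevity rather than a security recommendation.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.PrivateKey;
import java.security.PublicKey;

// Illustrative hybrid encryption (not edu.purdue.cs.ds.Cryptography):
// a fresh AES key encrypts the payload, and the recipient's RSA public
// key encrypts the AES key.
public class Hybrid {
    // returns { RSA-encrypted AES key, AES-encrypted payload }
    public static byte[][] encrypt(byte[] plaintext, PublicKey rsaPub) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey aes = kg.generateKey();              // ephemeral symmetric key
        Cipher data = Cipher.getInstance("AES");
        data.init(Cipher.ENCRYPT_MODE, aes);
        Cipher wrap = Cipher.getInstance("RSA");
        wrap.init(Cipher.ENCRYPT_MODE, rsaPub);
        return new byte[][] { wrap.doFinal(aes.getEncoded()), data.doFinal(plaintext) };
    }

    public static byte[] decrypt(byte[][] ct, PrivateKey rsaPriv) throws Exception {
        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.DECRYPT_MODE, rsaPriv);
        SecretKey aes = new SecretKeySpec(unwrap.doFinal(ct[0]), "AES");
        Cipher data = Cipher.getInstance("AES");
        data.init(Cipher.DECRYPT_MODE, aes);
        return data.doFinal(ct[1]);
    }
}
```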


\subsection{Standalone Deployment}
To simulate and verify how the shares are distributed among servers, we
developed a framework for testing on a single machine, which
allowed us to debug the implementation of the secret sharing scheme with ease.

This framework is based on the Apache Axis2 \cite{axis2} web services engine. We
start a single server instance which deploys a set of services, each representing
a server that will receive sharings. Each of these services has the following
operations:
\begin{itemize}
\item $receiveShare$ : Receives a share from the dealer or another service and
stores it in a local share store specific to that service.
\item $refresh$ : Carries out share refreshing, where the service generates new share
sets for all services, including itself, out of $each$ of its existing shares in the
share store and sends these to the relevant services.
\item $getShareStore$ : Returns the complete share store to the requester.
\end{itemize}

A dealer implementation was developed as a client to these services, which
initially gives the servers all sets of initial sharings. After this, using a
multithreaded client, we call refresh to notify each service to start share
refreshing. After the end of a share refreshing phase (where the response to the
blocking refresh request is "OK"), we run an administrator client to obtain
share stores (where each service uses the $toDisk()$ serialization methods
to generate the response payload) from $k + 1$ services and attempt to
reconstruct the main secret used by the dealer.

\subsection{Distributed Deployment}

In addition to developing a standalone version of the secret sharing scheme,
we give a fully distributed implementation. All servers and dealers are connected
by private and authenticated TCP connections. The protocol proceeds through
a series of message exchanges allowing participants to exchange and verify
shares.\\ \indent
Initially, the dealer $d$ sends a \emph{verifiable share} message to all servers $p_i$,
containing the first sharing of the original secret $s$. That is:

\begin{equation}
\forall p_i: d \rightarrow p_i : \langle \textit{ verifiable share },
\Lambda, i, E_{p_i}(\{(j, S_i[j], R_i[j]) \mid j \in I_{p_i}\})\rangle_d
\end{equation}

The notation $\langle m \rangle_x$ denotes that party $x$ has digitally
signed the contents of $m$, and the notation $E_{p_i}(\cdot)$ denotes
encryption under a hybrid cryptosystem using RSA with ephemeral AES
keys.\\ \indent
Upon receiving a \emph{verifiable share} message, server $p_i$ verifies
that the message is correct by checking:

\begin{enumerate}
\item \emph{verifySignature}$(d)$
\item $\Lambda[j] = g^{S_i[j]}h^{R_i[j]} \mod p$
\end{enumerate}

After each server $p_i$ has received properly verified shares from the dealer $d$,
the subsharing protocol is initiated. This involves exchanging \emph{verify} and
\emph{verified} messages to distribute the original shares among the servers
and to prove that the subshares are valid.\\ \indent
Initially, each server $p_i$ with shares $s_i \in S_i$ creates a \emph{subsharing}
by running $\forall s_i \in S_i: s_i' = \textit{ generate }(s_i, n, p, q)$, which returns
a ${{n}\choose{k}}$ subsharing for each share $s_i$. Let $S_i'$ denote the set
of subsharings created by server $p_i$. Then $p_i$ distributes \emph{verify}
messages as follows:

\begin{equation}
\forall p_{j, j \neq i} : p_i \rightarrow p_j :
\langle \textit{ verify } , p_j, p_i,
\Lambda, k, \lambda_k, E_{p_j}(\{(l, S_k'[l], R_k'[l]) \mid l \in I_{p_j}\})\rangle_{p_i}
\end{equation}

Recall that $\Lambda$ is the commitment vector for the original sharing $S$,
while $\lambda_k$ is the commitment vector for the subsharing $s_k'$ of 
share $s_k \in S$. All servers upon receiving a \emph{verify} message
perform the following checks:

\begin{enumerate}
\item $\Lambda[0] = \prod_{i=1}^n \Lambda[i] \mod p$
\item $\lambda_k[0] = \prod_{i=1}^n \lambda_k[i] \mod p$
\item $\Lambda[k] = \lambda_k[0]$
\item $\lambda_k[l] = g^{S_k'[l]}h^{R_k'[l]} \mod p$ for each received index $l$
\end{enumerate}
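The product checks (conditions 1 and 2 above) exploit the homomorphism of Pedersen commitments: the commitment to the whole value must equal the product of the share commitments modulo $p$. A minimal sketch, with a hypothetical class name and toy parameters rather than the project's constants:

```java
import java.math.BigInteger;

// Illustrative consistency check on a commitment vector: entry 0 commits to
// the whole value, entries 1..n commit to its shares, and homomorphism
// requires lambda[0] = product of lambda[1..n] mod p.
public class CommitCheck {
    public static boolean consistent(BigInteger[] lambda, BigInteger p) {
        BigInteger prod = BigInteger.ONE;
        for (int i = 1; i < lambda.length; i++)
            prod = prod.multiply(lambda[i]).mod(p);
        return lambda[0].equals(prod);
    }
}
```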

For all \emph{verify} messages that satisfy the above conditions, server $p_i$
sends a \emph{verified} message to the originating server $p_j$:

\begin{equation}
\langle \textit{ verified }, p_i, p_j, \Lambda, k, \lambda_k \rangle_{p_i}
\end{equation}

Upon receiving $2t+1$ \emph{verified} messages, the subsharing is considered
\emph{certified}.\\ \indent
This process is repeated \emph{ad infinitum}, with the subsharings from the last
certified subsharing used to generate new subsharings for the next round. In our
implementation, the number of share refreshing cycles $x$ to perform is governed
by the \emph{-refresh x} argument.

Note that servers only wait for responses from $2t+1$ servers, including the
server itself. Thus, the algorithm is guaranteed to make progress even in the
event of $t$ failures.

Due to the exponential expansion in the number of shares, more than two rounds
of share refreshing usually exhaust the receive buffer allocated by the operating
system. This could be remedied by cycling between \emph{send} and \emph{receive}
operations at the servers, but was unnecessary for a proof-of-concept demonstration.
Overcoming the receive buffer limitation would not, however, eliminate the need to
store an exponentially increasing number of shares.
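The growth rate follows directly from the scheme: each share expands into $l = {{n}\choose{k}}$ subshares per refresh round, so the total share count after $x$ rounds is roughly $l^{x+1}$. A small sketch of this count (the class is hypothetical and the estimate ignores which subset of subshares each individual server stores):

```java
import java.math.BigInteger;

// Illustrative count of the total number of shares in the system after a
// number of refresh rounds: the initial sharing has l = C(n, k) shares, and
// each round multiplies the count by l again.
public class ShareGrowth {
    static BigInteger choose(int n, int k) {
        // multiplicative formula for C(n, k); every partial result is integral
        BigInteger c = BigInteger.ONE;
        for (int i = 1; i <= k; i++)
            c = c.multiply(BigInteger.valueOf(n - k + i)).divide(BigInteger.valueOf(i));
        return c;
    }
    public static BigInteger sharesAfter(int n, int k, int rounds) {
        return choose(n, k).pow(rounds + 1);
    }
}
```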

\subsection{Experiments}
All protocol experiments were carried out using the xinu machines.
The details of the experiments are discussed in the experiments section. We
implemented a $benign$ $administrator$ (edu.purdue.cs.ds.BenignAdmin) which
collects share stores from $k + 1$ of the servers and then carries out the
reconstruction of the main secret using those stores.
The share stores used here are collected from the serializations of the share
stores by each xinu server that took part in the protocol.

We further tested the performance of our secret sharing scheme as a demonstration
of its weakness of exponential subshare generation during share refreshing. This
was a standalone implementation in which we maintain a set of share stores to
represent each server in the sharing scheme; we then carried out several
share refreshing iterations and measured the share generation time for each
server and the reconstruction time. The results of this experiment are included in
the experiments section.

\subsection{Unit Tests}

We developed unit tests to check each of the basic constructs we developed. 
\begin{itemize}
	\item edu.purdue.cs.ds.vss.VerifierTest 
	
	This tests the generation of a verifiable sharing using 
	edu.purdue.cs.ds.vss.Generator and verifying the commitments using 
	edu.purdue.cs.ds.vss.Verifier. We further test the generation of subshares
	and their commitment values.
	
	\item edu.purdue.cs.ds.vss.SubSharingTest
	\item edu.purdue.cs.ds.vss.SubSharingTest2
	
	These tests exercise share reconstruction using the initial sharing
	and further subsharings. They helped us verify that our
	implementation of share splitting, assignment to $n$ servers, and
	reconstruction using $k+1$ servers is correct.
	
	\item edu.purdue.cs.ds.vss.ShareTest
	\item edu.purdue.cs.ds.vss.ParentIndexTest
	\item edu.purdue.cs.ds.vss.ShareStoreTest
	
	These tests were used to test serialization of the share stores to the disk.
	
	\item edu.purdue.cs.ds.vss.AdminTest
	
	This tests the benign admin implementation where it generates a set of 
	share stores after share refreshing, stores them on disk and reads those
	share stores to verify that the original secret can be reconstructed using 
	$k+1$ of those share stores.
	
\end{itemize}


