\section{Design}
This section describes the implementation of persistent Paxos and
two-phase commit in ShardKV++.

\subsection{Persistent Paxos}
%Each replicated group of ShardKV++ uses Paxos to determine the
%execution order of operations. 
The Paxos log resides in persistent storage and, in our design,
contains everything a server needs for recovery (including read/write
operations as well as shard information). If a server restarts after
a power failure or system crash, it simply replays the log to bring
itself up to date.

ShardKV++ stores the Paxos state of each particular instance number
and server (acceptor) as a single file. Updates to a Paxos state are
performed by atomically writing to the corresponding file (see
Figure~\ref{fig:atomwrite}). Data is marshalled and unmarshalled using
standard Go packages.

Each server also keeps a copy of the Paxos states in memory for fast
read-only operations. The copy is created lazily, like a cache: the
server loads a state into memory only when there is a miss. This
reduces the recovery overhead of a restarted server.
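The lazy in-memory copy can be sketched as a simple map guarded by a
mutex. The type and field names here are illustrative, and the
per-instance disk read is abstracted behind a function value:

```go
package main

import (
	"fmt"
	"sync"
)

// PaxosState is a stand-in for the real per-instance acceptor state.
type PaxosState struct {
	Seq     int
	Decided bool
	Value   string
}

// StateCache lazily loads Paxos states from disk on a miss, so a
// restarted server only pays the read cost for states it touches.
type StateCache struct {
	mu     sync.Mutex
	states map[int]*PaxosState
	load   func(seq int) *PaxosState // reads the instance's file
}

func (c *StateCache) Get(seq int) *PaxosState {
	c.mu.Lock()
	defer c.mu.Unlock()
	if st, ok := c.states[seq]; ok {
		return st // hit: serve from memory
	}
	st := c.load(seq) // miss: load the state file into memory
	c.states[seq] = st
	return st
}

func main() {
	loads := 0
	c := &StateCache{
		states: map[int]*PaxosState{},
		load: func(seq int) *PaxosState {
			loads++
			return &PaxosState{Seq: seq}
		},
	}
	c.Get(3)
	c.Get(3)           // second read hits the cache
	fmt.Println(loads) // prints 1
}
```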

\paragraph{Performance and Reliability} Several optimizations have
been added to ShardKV++ to reduce network message exchange and I/O
operations. Figure~\ref{fig:paxosspeed} shows the average time
Lab3-Paxos (the original Paxos code that this work is based on) and
ShardKV++ take to complete all Lab 3 tests.

For performance reasons, ShardKV++ does not flush data to disk
immediately on each write. Instead, the file system flushes its
in-memory cache periodically. A program crash is not an issue, since
the flush will eventually happen. To survive a power failure, however,
techniques such as the UPS used in the Harp system~\cite{harp} can be
applied. Other techniques, such as that of~\cite{rio}, can be used to
survive operating system crashes.

\begin{figure}[bh]
\centering
\small
\begin{tabular}{l|c|c|c}
\hline
 & \textbf{Persistent?} & \textbf{Immediate Flush?} & \textbf{Avg.
 Time} \\
\hline
Lab3-Paxos & No & No & 52.6s \\
ShardKV++ & No & No & 48.9s \\
ShardKV++ & Yes & No & 50.3s \\
%ShardKV++ & Yes & Yes & 390.8s \\
\end{tabular}
\caption{The average time for optimized ShardKV++ and its original Lab
3 implementation to complete all Paxos tests.}
\label{fig:paxosspeed}
\end{figure}

\begin{figure}[bh]
\centering
\small
\begin{tabular}{c|c|c}
\hline
 \textbf{Persistent?} & \textbf{Immediate Flush?} & \textbf{Avg.
 Time} \\
\hline
No & No & 1.1s \\
Yes & No & 2.5s \\
Yes & Yes & 45.1s \\
\end{tabular}
\caption{The average time for ShardKV++ to complete 100 sequential
Puts.}
\label{fig:paxosspeed2}
\end{figure}


\begin{figure}[h]
\centering
\small
\begin{tabular}{|ll|}
\hline
R1: & If X is missing but X.tmp.* and X.alt.* \\
& are present, rename an X.alt.* file to X \\
R2: & If X is present but X.tmp.* also exist, \\
& delete all X.tmp.* files and continue \\
R3: & If X is present but X.alt.* also exist, \\
& delete all X.alt.* files and continue \\
\hline
W1: & Write data to temporary file X.tmp.uid \\
W2: & Rename X to alternate file X.alt.uid \\
W3: & Rename X.tmp.uid to X \\
W4: & Delete X.alt.uid and ignore failures\\
\hline
\end{tabular}
\caption{Atomic write to a file X. The {\em uid}s are random 64-bit
integers generated at each write call. R1--R3 are recovery and
rollback steps and W1--W4 are write steps.}
\label{fig:atomwrite}
\end{figure}


\subsection{Atomic Transactions}
The atomicity of a transaction includes: (1) a transaction either
succeeds with all puts applied on the corresponding participants, or
fails with no effect on the system; (2) no puts on any of the keys
included in a transaction can happen during the transaction, including
puts from other transactions; (3) no reconfigurations can happen
during a transaction; (4) no transactions can happen during a
reconfiguration.

The RPC interface for transactions is called Puts, which takes a list
of key-value pairs as its argument. The RPC handler of Puts acts as
the coordinator in a two-phase commit protocol. The protocol is: (1)
assign a unique Transaction ID to this transaction; (2) compute the
participants of this transaction; (3) send Prepare messages with the
Transaction ID and key-value pairs to each participant; (4) if all
participants return OK in step (3), do step (5), otherwise do step
(6); (5) send Commit messages with the Transaction ID to all
participants and wait until all return OK; (6) send Abort messages
with the Transaction ID to all participants and wait until all return
OK.
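The coordinator loop above can be sketched as follows. The
\texttt{Participant} interface, the \texttt{groupFor} lookup, and the
in-process \texttt{fakeGroup} are all hypothetical stand-ins for the
real RPC machinery; in the actual system these calls go over the
network and are retried until they succeed.

```go
package main

import "fmt"

// Participant abstracts one replica group's transaction handlers.
// These method names are illustrative, not the real ShardKV++ API.
type Participant interface {
	Prepare(txid int64, kv map[string]string) bool
	Commit(txid int64) bool
	Abort(txid int64) bool
}

// Puts runs two-phase commit over the groups that own the keys.
// groupFor stands in for the shard-to-group lookup.
func Puts(txid int64, kv map[string]string,
	groupFor func(key string) Participant) bool {
	// Step (2): compute each participant's share of the keys.
	parts := map[Participant]map[string]string{}
	for k, v := range kv {
		g := groupFor(k)
		if parts[g] == nil {
			parts[g] = map[string]string{}
		}
		parts[g][k] = v
	}
	// Step (3): send Prepare to every participant.
	ok := true
	for g, sub := range parts {
		if !g.Prepare(txid, sub) {
			ok = false
		}
	}
	// Steps (5)/(6): commit or abort, blocking until each
	// participant returns OK (hence Puts is a blocking call).
	for g := range parts {
		if ok {
			for !g.Commit(txid) {
			}
		} else {
			for !g.Abort(txid) {
			}
		}
	}
	return ok
}

// fakeGroup is an in-process participant used for demonstration.
type fakeGroup struct {
	vote      bool
	committed bool
	aborted   bool
}

func (g *fakeGroup) Prepare(txid int64, kv map[string]string) bool {
	return g.vote
}
func (g *fakeGroup) Commit(txid int64) bool { g.committed = true; return true }
func (g *fakeGroup) Abort(txid int64) bool  { g.aborted = true; return true }

func main() {
	a := &fakeGroup{vote: true}
	b := &fakeGroup{vote: true}
	groupFor := func(key string) Participant {
		if key < "m" {
			return a
		}
		return b
	}
	ok := Puts(1, map[string]string{"apple": "1", "zebra": "2"}, groupFor)
	fmt.Println(ok, a.committed, b.committed) // prints "true true true"
}
```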

Each shardkv server maintains two tables to support transactions. The
first, called {\em transactions}, is a map from Transaction ID to a
list of keys; it is used to look up the keys (on this server) involved
in the transaction identified by that ID. The second, called {\em
locks}, is a map from key to value, and it plays a double role. First,
it serves as the lock table: a key is locked if and only if it is
present in {\em locks}. During a transaction, all keys in the
transaction are locked to prevent writes. Second, it serves as the
staging area for attempted writes: the values it stores are the values
the transaction is going to set. These values are written to regular
storage on commit, or discarded on abort.

The RPC handler for Prepare puts the prepare command into the Paxos
log, so that all replicas in the group execute it. When executing, a
server puts the key-value pairs from the Prepare message into the {\em
locks} table and records the Transaction ID in the {\em transactions}
table. The Commit and Abort handlers follow the same pattern. When
executing a commit command, a server moves the key-value pairs
involved in the transaction from the {\em locks} table to regular
storage, deleting them from {\em locks} and thereby unlocking them.
When executing an abort command, a server simply deletes the
transaction's keys from the {\em locks} table.
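The two tables and the prepare/commit/abort apply steps can be
sketched as below. Field and function names are our own; the real
handlers additionally go through the Paxos log before these steps run.

```go
package main

import "fmt"

// Server holds the per-replica transaction state: regular storage,
// the 'locks' table (locked keys with their staged values), and the
// 'transactions' table (Transaction ID -> keys held on this server).
type Server struct {
	storage      map[string]string
	locks        map[string]string
	transactions map[int64][]string
}

// applyPrepare locks each key and stages its new value.
func (s *Server) applyPrepare(txid int64, kv map[string]string) {
	for k, v := range kv {
		s.locks[k] = v
		s.transactions[txid] = append(s.transactions[txid], k)
	}
}

// applyCommit installs the staged values into regular storage and
// unlocks the keys by removing them from 'locks'.
func (s *Server) applyCommit(txid int64) {
	for _, k := range s.transactions[txid] {
		s.storage[k] = s.locks[k]
		delete(s.locks, k)
	}
	delete(s.transactions, txid)
}

// applyAbort discards the staged values, unlocking the keys.
func (s *Server) applyAbort(txid int64) {
	for _, k := range s.transactions[txid] {
		delete(s.locks, k)
	}
	delete(s.transactions, txid)
}

func main() {
	s := &Server{
		storage:      map[string]string{},
		locks:        map[string]string{},
		transactions: map[int64][]string{},
	}
	s.applyPrepare(7, map[string]string{"x": "1"})
	s.applyCommit(7)
	fmt.Println(s.storage["x"], len(s.locks)) // prints "1 0"
}
```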

A subtle part of the two-phase commit protocol is that the coordinator
has to wait (possibly forever) for all Commits or all Aborts to
succeed. The reason is that if the coordinator fails to tell a
participant to commit or abort, the keys in the transaction remain
locked forever on that participant. A Puts call is therefore a
blocking call. At present the coordinator is a single server, so it is
a single point of failure in our system.

%The codes for ordinary Put and reconfiguration are changed to respect
%the transaction locks. We added test cases to cover all the atomicity
%requirements listed above.
