\section{Evaluation}

\subsection{Workload and Configuration}
First, we evaluate the performance of {\ourSystem} using microbenchmarks
that issue both sequential and random reads and writes. We compare {\ourSystem}'s performance
to a Gaios-like system that we implement (denoted in the following as G'), and to an
 unreplicated local disk. Both G' and {\ourSystem} use the same code base; the only
significant difference is that G' forwards all updates and stores all blocks at all
replicas, while {\ourSystem} processes each block at $f+1$ of the $2f+1$ replicas.
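To make the replication difference concrete, the following Python sketch maps blocks to preferred quorums; the round-robin placement function is purely illustrative (it is not {\ourSystem}'s actual placement policy), but it captures the invariant that each block's data is stored at only $f+1$ of the $2f+1$ replicas.

```python
def preferred_quorum(block_id: int, f: int) -> list[int]:
    """Map a block to the f+1 replicas (of 2f+1) that store its data.

    Illustrative placement: f+1 consecutive replica ids starting at
    block_id mod (2f+1). Metadata, by contrast, is fully replicated
    at all 2f+1 replicas.
    """
    n = 2 * f + 1
    start = block_id % n
    return [(start + i) % n for i in range(f + 1)]

# With f=1, each block's data lives on 2 of the 3 replicas, so each
# replica stores (f+1)/(2f+1) = 2/3 of the blocks.
```

Under this mapping, a replica only processes the write traffic for its own slices, which is the source of the throughput gains measured below.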

Second, we evaluate {\ourSystem} in failure and recovery. We compare
{\ourSystem} with G' and Cheap Paxos in terms of availability,
performance in the face of failures, and recovery time.

We use five machines as servers and five machines as clients. We run our performance evaluation experiments for
two configurations: $f=1$ (three servers) and $f=2$ (five servers); we
run the recovery experiments with $f=1$.
{\ourSystem}'s design calls for using a disk array for data storage and an additional disk to store log and
metadata, but since our machines have only two
Western Digital WD2502ABYS 250GB 7200 RPM hard
drives, we evaluate {\ourSystem} in a configuration where one disk is used as preferred and reserve storage,
while the other stores the log and metadata.
 Each machine is equipped with a 4-core Intel Xeon X3220
 2.40GHz CPU and 3GB of memory. For all experiments, we allocate 96GB of logical
storage space replicated across nodes by the system under test.
All machines are connected with 1Gbps Ethernet.

For each experiment, we make sure there are enough client processes
and outstanding requests to saturate the system; we make sure the
experiment is long enough so that the write buffers are full; and we
use the last 80\% of requests to calculate the stable throughput. In
all experiments, the read and write batches at each replica consist
of, respectively, 100 and 10 requests. The values of other parameters
(number of clients, number of outstanding requests per client, etc)
depend on the block size (4KB, 64KB, 1MB) and workloads
(sequential/random write/read), and we do not list all of them. In
general, sequential workloads and small blocks need more outstanding
client requests to saturate the system; random workloads and big
blocks need fewer; and random workloads with small blocks need a
longer time to saturate the write buffer.  For example, for the 4KB
sequential write workloads, we use 30 clients, each with 200
outstanding requests, to saturate the system; for the 4KB random write
workloads, three clients with 200 requests each are enough, but we need to
run the experiments for three hours to measure the stable throughput; and
for the 1MB sequential write workloads, it takes just five clients with
60 outstanding requests each to saturate the system.



\subsection{I/O Throughput}
\label{Gnothi-throughput}

{\ourSystem} maximizes I/O throughput by executing reads and writes on
subsets of disks.

Figure \ref{graph:gnothi-random} shows the random I/O performance for $f=1$
and $f=2$. For random workloads, the bottleneck of the system is the seek time
for each replica's data disk.

For write operations, {\ourSystem} is
40--64\% faster than writing to local disk or to G' for $f=1$ and
53--75\% for $f=2$. {\ourSystem}'s advantage comes from only having to
perform the writes at 2/3 (for $f=1$) or 3/5 (for $f=2$) of the
nodes. As expected~\cite{Bolosky11Paxos}, the random write performance
of G' is close to that of a single local disk because all replicas
process all updates.

For read operations, {\ourSystem} and G' perform identically since they use the same read protocol.
{\ourSystem}/G' is 2.5--3.4 times faster than a single local disk for $f=1$
and 3.3--6.1 times faster for $f=2$, because it executes each read on one replica.  For small
requests, the improvement factor can exceed $2f+1$ since each replica is responsible for
$1/(2f+1)$ of the data, and thus the average seek time is reduced.
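The scaling argument can be written as a small back-of-the-envelope calculation; the seek-reduction factor below is an assumed free parameter, not a quantity measured in our experiments.

```python
def ideal_read_speedup(f: int, seek_gain: float = 1.0) -> float:
    """Ideal aggregate read speedup over a single local disk.

    Each read executes at exactly one replica, and the 2f+1 replicas
    serve disjoint 1/(2f+1) slices of the data in parallel, giving a
    factor of 2f+1. seek_gain (>= 1) models the additional benefit of
    each disk covering a smaller data range, which shortens average
    seeks; it is an assumption for illustration only.
    """
    return (2 * f + 1) * seek_gain

# Measured factors above 2f+1 (e.g., more than 3x for f=1 with small
# requests) are consistent with seek_gain > 1.
```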


Note that for small random I/O, the local per-disk write bandwidth
significantly exceeds the corresponding read bandwidth. The reason is that, once writes are committed to the log,
we can buffer large numbers of writes before writing them back to the
data disk, allowing the disk scheduler more opportunities to minimize seek and rotational
latency. Reads, on the other hand, must be processed immediately, so the scheduler
has fewer opportunities for optimization. Taking for example the 4KB random workload,
a local disk can process 383 random writes per second, while it can only process about 155
random reads per second if there are 100 concurrent read requests.


\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_random.pdf}}
\caption{\label{graph:gnothi-random} Random I/O with 3 ($f$=1) and 5 ($f$=2) servers.}
\end{figure}


Figure \ref{graph:gnothi-burst} shows the effect of a burst of random writes
when $f=1$ and the system buffers are not full. During the first few
seconds, writes are logged to the logging disk and buffered in memory,
without being bottlenecked by flushing to the data disk, so
{\ourSystem}'s throughput is much higher than the data disk's
write-back throughput. Then, when the operating system detects that more than
10\% of the system memory is dirty, it begins to write back data to
disk at the same rate it receives new requests, and {\ourSystem} slows
down. Figure~\ref{graph:gnothi-random} shows the stable write throughput,
where, to eliminate the effects of the initial spike, we run our experiments
for sufficiently long (more than 3 hours) and calculate the throughput
of the last 80\% of requests.
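This burst behavior can be approximated with a toy model. All rates below are illustrative round numbers (not measurements), and the model assumes only the Linux default of flushing once dirty data reaches 10\% of memory.

```python
def burst_model(ingest_mb_s: float, writeback_mb_s: float,
                mem_mb: float, seconds: int,
                dirty_ratio: float = 0.10) -> list[float]:
    """Toy model of burst writes under Linux's dirty-page threshold.

    Writes are absorbed at the full ingest rate while dirty data in
    memory stays below dirty_ratio * mem_mb; once the threshold is
    reached, the observed rate drops to the data disk's write-back
    rate. Rates and sizes are illustrative assumptions.
    """
    threshold = dirty_ratio * mem_mb
    dirty = 0.0
    observed = []
    for _ in range(seconds):
        if dirty < threshold:
            observed.append(ingest_mb_s)            # buffered in memory
            dirty += ingest_mb_s - writeback_mb_s   # dirty data accumulates
        else:
            observed.append(writeback_mb_s)         # throttled by write back
    return observed
```

On a 3GB machine the threshold is roughly 307MB of dirty data, so a burst ingesting much faster than the disk can write back sustains its peak for only a few seconds, matching the shape of the figure.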

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_burstWrite.pdf}}
\caption{\label{graph:gnothi-burst} Burst writes. In the default configuration, Linux starts to flush
dirty data to disk if 10\% of total system memory pages are dirty.}
\end{figure}

Figure~\ref{graph:gnothi-sequential} shows the sequential I/O
performance with $f=1$ and $f=2$.

For the sequential write workload with $f=1$, {\ourSystem} can achieve about 60MB/s with a 4KB block size
and about 90MB/s with a 1MB block size. The bottleneck for the 4KB block size is probably ZooKeeper's
agreement, which processes about 15K updates per second. For 1MB requests, our profiler shows
that the bottleneck is probably Java's memory allocation and garbage collection,
so customized memory management or a C implementation may achieve better performance.
Compared to G', {\ourSystem} is 44\% to 56\% faster because {\ourSystem} directs writes
to subsets of nodes.

For the read workload, {\ourSystem}/G' can achieve a total bandwidth of about 250MB/s with
1MB blocks. One problem with reads is that if we use only one client, the client's
network link becomes a bottleneck, and if we use multiple clients, then the workload
is not fully sequential. This problem is more severe for small requests.


Compared with the $f=1$ case,  {\ourSystem}'s throughput for 64K and 1MB writes increases by
about 10\% when $f=2$. In the 4KB case the
bottleneck is agreement, so there is almost no improvement. The
throughput of G' decreases slightly since its replication cost is
higher. For reads, {\ourSystem}/G' scales throughput by nearly a factor
of 4 compared to a single disk.


\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_sequential.pdf}}
\caption{\label{graph:gnothi-sequential} Sequential I/O with 3 ($f$=1) and 5 ($f$=2) servers.}
\end{figure}


\subsection{Failure Recovery}

{\ourSystem} does three things to maximize availability and recovery
speed. First, it fully replicates metadata, allowing the system to
remain continuously available in the face of up to $f$ failures despite
partial replication of data. Second, partial replication of data
reduces recovery time, because the recovering node only needs to
fetch $(f+1)/(2f+1)$ (e.g. 2/3 for $f=1$) of the data. It also
improves performance during recovery, because once metadata is
restored, full block updates are only sent to and executed on the
block's preferred quorum.  Third, separation of data and metadata
improves system throughput during recovery and reduces recovery
time. The recovering node can catch up with other nodes even if they
continue to process new updates at a high rate. In particular, since
processing metadata is faster than processing full requests, Phase 1
of recovery can always catch up with missed and new requests. Once
Phase 1 is complete, the recovering replica no longer falls behind as
new requests are executed, since it can process and store all new block
updates directed to it, while it fetches old update bodies for all
\emph{INCOMPLETE} blocks in its preferred slices.
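The interaction between the two phases can be sketched as a small state machine over per-block status. The class and method names below are illustrative inventions, not {\ourSystem}'s actual code, but the \emph{INCOMPLETE}/\emph{COMPLETE} transitions follow the protocol described above.

```python
INCOMPLETE, COMPLETE = "INCOMPLETE", "COMPLETE"

class RecoveringReplica:
    """Toy model of the two recovery phases (names illustrative).

    Phase 1 fetches only metadata and marks every block updated while
    the replica was down as INCOMPLETE. After Phase 1, the replica
    executes new writes directly (marking those blocks COMPLETE),
    while Phase 2 fetches the bodies of the remaining INCOMPLETE
    blocks in the background.
    """

    def __init__(self) -> None:
        self.status: dict[int, str] = {}

    def phase1_fetch_metadata(self, missed_blocks: list[int]) -> None:
        for b in missed_blocks:
            self.status[b] = INCOMPLETE      # metadata only, no data yet

    def apply_new_write(self, block: int) -> None:
        self.status[block] = COMPLETE        # executed locally, no fetch

    def phase2_fetch_bodies(self) -> int:
        fetched = 0
        for b, s in self.status.items():
            if s == INCOMPLETE:              # body still missing
                self.status[b] = COMPLETE    # fetched from another replica
                fetched += 1
        return fetched
```

Note that new writes arriving during Phase 2 shrink the set of bodies Phase 2 must fetch, which is why the recovering replica never falls further behind once Phase 1 completes.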

Figures~\ref{graph:gnothi-catchup} and \ref{graph:gnothi-rereplicate} look at two
recovery scenarios. Figure~\ref{graph:gnothi-catchup} shows the case when a node
temporarily fails and then recovers by fetching just the updated blocks it missed.
Figure~\ref{graph:gnothi-rereplicate} shows the case when a node permanently fails and
is replaced by a new node that must fetch all data from others. We run both experiments
with $f=1$, 4KB blocks, and a sequential write workload. We choose
the sequential write workload because it is the most challenging workload for recovery:
during recovery, the clients write new data at a high rate, consuming
a large portion of the servers' network and disk bandwidth.

In Figure~\ref{graph:gnothi-catchup}, we kill one server 60 seconds after the experiment starts
and restart it 60 seconds later. Here both \ourSystem~and G' suffer a brief drop in throughput while
they wait for timeout and then continue without the failed node.
  After the replica restarts at time 120, it takes about 110
seconds (to time 230) to recover from its local disk (mainly replaying logs), and about 22 seconds (to time 252)
to join the agreement protocol. Then {\ourSystem} spends 26 seconds (to time 278) in Phase 1, during
which the recovering replica fetches write metadata (but not data) and marks all updated
blocks as \emph{INCOMPLETE}. Once Phase 1 completes, the recovering replica begins
servicing new requests, writing new writes to its local state, and marking updated blocks
as \emph{COMPLETE}. After Phase 1 completes, the recovering replica also begins
Phase 2 of recovery by fetching from other replicas \emph{INCOMPLETE} blocks in its
preferred slices. Phase 2 completes at time 530, at which point recovery is complete,
and {\ourSystem} returns to its original throughput.

The throughput of G' starts at 50MB/s and remains unchanged during the
failure. After the replica resumes operation, in order to complete the recovery at time
530, G' must throttle the rate at which it services new requests to about 16MB/s.

Cheap Paxos is unavailable from time 60 to 230, since there is only one available
replica and it does not have sufficient time to copy 96GB to a spare machine.
When the replica resumes operation, Cheap Paxos can immediately go back to normal (time 230) since
it does not process any new requests during the failure period.

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_catchup.pdf}}
\caption{\label{graph:gnothi-catchup} Failure recovery (catch up).}
\end{figure}

In Figure~\ref{graph:gnothi-rereplicate}, one server is killed 300 seconds after the
experiment starts and is replaced 300 seconds later by a new server whose
local disk is freshly initialized and must be fully rebuilt. {\ourSystem} takes about 80
seconds\footnote{This may seem like a long time considering that the size of metadata is about 576MB.
This duration has three causes: first, the metadata storage uses a design based on Bigtable (see
Section~\ref{gnothi-metadata}), which logs metadata first and garbage-collects outdated log entries
periodically in the background: in such a design, the actual size of metadata to be scanned is usually larger
than 576MB, because certain outdated metadata may not have been garbage-collected yet.
Second, although ideally, disk scan and network transfer of metadata should be
parallelized to maximize throughput, we have not implemented this optimization. Third,
clients may still send requests to replicas during Phase 1, causing contention on the network.}
in Phase 1 to fetch metadata from the primary.  After Phase 1 completes,
the recovering replica begins servicing new requests, and at the same time,
re-replicating its disk by fetching blocks from others. The recovering replica completes
re-replication at time 3400, and during this period, it can service new requests
at a rate of about 48 MB/s.

G' can also complete recovery at time 3400, but during this period, it can only
service new requests at a rate of about 16 MB/s.

Cheap Paxos is unavailable before re-replication completes, but since it uses
all its bandwidth to perform recovery, it ends re-replication at time 2400.

Comparing Figure~\ref{graph:gnothi-catchup} and Figure~\ref{graph:gnothi-rereplicate},
{\ourSystem}'s catch-up recovery takes less time than full re-replication (410 seconds vs 2800 seconds),
but catch-up inflicts a bigger hit on throughput: when re-replicating
all blocks, the disk accesses are always sequential, whereas when re-replicating only a subset,
the disk accesses may be random. The recovery cost per block is therefore smaller in
full re-replication, though the total number of blocks to be fetched is larger, which results
in higher client throughput but a longer recovery time.

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_rereplicate.pdf}}
\caption{\label{graph:gnothi-rereplicate} Failure recovery (re-replicate).}
\end{figure}

Both {\ourSystem} and G' can divide resources between servicing new
requests and fetching state for recovery by tuning the time
interval (in milliseconds) at which a replica issues a 16MB state-fetch request: a smaller value
means more aggressive recovery. In Figures~\ref{graph:gnothi-catchup} and \ref{graph:gnothi-rereplicate},
we configure this parameter so that {\ourSystem} and G' can recover in similar time, while
still providing reasonable throughput for new requests. In Figures~\ref{graph:gnothi-rereplicate2}
and \ref{graph:gnothi-rereplicate3}, we show the effect of different configurations.
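The knob translates directly into a bandwidth budget for recovery. The helper below is ours, for illustration; it simply restates the interval-to-bandwidth arithmetic.

```python
def recovery_bandwidth_mb_s(interval_ms: float, chunk_mb: float = 16.0) -> float:
    """Bandwidth devoted to recovery when one chunk_mb state-fetch
    request is issued every interval_ms milliseconds. A smaller
    interval means more aggressive recovery, leaving less disk and
    network bandwidth for new client requests.
    """
    return chunk_mb * 1000.0 / interval_ms

# One 16MB fetch per second devotes 16MB/s to recovery; halving the
# interval doubles the recovery bandwidth.
```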

Figure~\ref{graph:gnothi-rereplicate2} shows that {\ourSystem} can always catch up,
so the administrator can tune this parameter to balance resources
used for recovery and for processing new requests. Conversely, if G' sets this parameter
too high (i.e., not aggressive enough), the recovering replica never catches up. For example, in
Figure~\ref{graph:gnothi-rereplicate3}, the replica in the experiment with parameter
600 does not catch up, since the recovery speed is similar to the
speed of processing new requests. {\ourSystem} is almost always better than G' in our experiments:
if recovery times are similar, {\ourSystem} can provide better throughput during recovery; and if
throughputs are similar, {\ourSystem} can recover faster.

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_rereplicate2.pdf}}
\caption{\label{graph:gnothi-rereplicate2} {\ourSystem} with different recovery values.}
\end{figure}

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/gnothi_rereplicate3.pdf}}
\caption{\label{graph:gnothi-rereplicate3} G' with different recovery values.}
\end{figure}







