\section{Introduction}
The primary directive of storage---not to lose data---is hard to
carry out: disks and storage
sub-systems can fail in unpredictable ways~\cite{Ford10Availability,Bairavasundaram07AnalysisLatent,Bairavasundaram08AnalysisDataCorruption,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures}, and so can the
CPUs and memories of the nodes that are responsible for accessing the
data~\cite{Schroeder09DRAM,Nightingale11Cycles}.
Concerns about robustness become even more pressing
as a result of the rapidly growing scale of modern storage systems:
big companies today can have
tens of data centers across the world, multiple independent
clusters inside each data center, and thousands of
machines inside each cluster. The growing number of
machines and the increasing complexity of software both create
greater opportunities for error and corruption.

Some recent systems have provided end-to-end
correctness guarantees on distributed storage despite arbitrary node
failures~\cite{mahajan10depot,Castro02Practical,Clement09Upright}, but
these systems are not scalable---they require each correct node
to process at least a majority of updates.  Conversely, scalable distributed storage
systems~\cite{anderson96serverless,lee96petal,anderson00interposed,ghemawat03google,maccormick04boxwood,chang06bigtable,calder11windows,hbase,Thekkath97Frangipani,Corbett12Spanner,Nightingale12FDS}
typically protect some subsystems, such as disk
storage, with redundant data and checksums, but leave other subsystems
vulnerable to single points of failure that can
cause data corruption or loss. My work, instead, aims
to provide a unique combination of robustness and
scalability to storage systems.

To achieve this goal, we are building two systems: the
first explores how to design such a system, and the second
helps us evaluate the first one's prototype at scale.

\begin{itemize}

\item \emph{A robust and scalable block store.}
We have built a system in the spirit of Amazon's popular Elastic Block
Store~\cite{amazonebs}, but with unprecedented guarantees of consistency, availability, and durability in the face of
a wide range of server failures (including memory corruption, disk corruption, firmware bugs, etc.). To
get there, we have developed new techniques that match the scalability and performance of today's systems
while eliminating the single points of failure that these systems introduce to achieve their efficiency.

\item \emph{An emulator for evaluating large-scale storage systems on small-to-medium infrastructures.}
A key challenge
when investigating new designs for large-scale storage systems is evaluating a prototype's
performance and scalability: it is hard for research organizations to gain access to testbeds large
enough to match the size of today's industrial deployments, never mind future ones. We will leverage
the observation that the behavior of storage systems often does not depend on the actual data being stored
to build an emulator that, by compressing data, can run a large number of unmodified nodes on a small
number of machines and help reveal problems that are unlikely to occur in small-scale experiments.

\end{itemize}

\section{Salus: a robust and scalable block store}

We've built \sys\footnote{\sys is the Roman goddess of safety
  and welfare.}, a scalable block store in the spirit of
Amazon's Elastic Block Store (EBS)~\cite{amazonebs}: a user can
request storage space from the service provider, mount it like a local
disk, and run applications upon it, while the service provider
replicates data for durability and availability. What makes \sys
unique is its dual focus on {\em scalability} and {\em robustness}.
Our aim is for \sys to provide strong end-to-end correctness
guarantees for read operations, strict ordering guarantees for write
operations, and strong durability and availability guarantees despite
a wide range of server failures (including memory corruption, disk
corruption, firmware bugs, etc.), and to scale these
guarantees to thousands of machines and tens of thousands of disks.


To achieve high scalability and efficiency, modern scalable storage
systems~\cite{hbase,calder11windows,chang06bigtable,Corbett12Spanner,Nightingale12FDS}
share many similar design principles.
However, combining these design principles with strong robustness guarantees presents several challenges.
Indeed, one of the main challenges in designing \sys
is to meet its robustness goals without compromising the scalability of existing
systems.

\begin{itemize}

\item To achieve high scalability, scalable storage systems incorporate
a large number of storage servers to process requests in parallel. However, parallel writes
to different storage servers can violate the ordering guarantees expected by the client,
as ``later'' writes may survive a crash while ``earlier'' writes from the same
client are lost. To order client requests, traditional replication techniques~\cite{Bressoud96Hypervisor,Budhiraja92Primary,Cully08Remus,Lamport98Part,Lamport01Paxos,Bolosky11Paxos,Castro02Practical,Clement09Upright}
require a single serialization point, but such a serialization point becomes a scalability bottleneck.

\item To maximize throughput and availability for
reads while minimizing latency and cost, scalable storage
systems execute read requests at just one replica. If
the chosen replica experiences a \emph{commission failure} that
causes it to generate erroneous state or output, the data
returned to the client could be incorrect. End-to-end
verification techniques have been introduced to detect
such errors~\cite{sun06zfs,Fu02fastand,li04secure,mahajan10depot}.
However, none of these systems is designed to scale to thousands of
machines because, to support multiple clients sharing a
volume, they depend on a single server to update the Merkle tree~\cite{Merkle80Protocols}.
Byzantine Fault Tolerance (BFT)
techniques~\cite{Castro02Practical,Clement09Upright}, on the other hand, achieve safe reads by reading from multiple replicas
and comparing the replies, but at the cost of issuing multiple reads instead of one.

\item To simplify the design and reduce cost, many systems that
replicate their storage layer for fault tolerance
leave unreplicated the computation nodes that can
modify the state of that layer: hence, an error at a single
computation node can irrevocably
and undetectably corrupt data. These computation nodes
often perform tasks such as data forwarding and garbage collection.

\item Additional robustness should ideally not result
in higher replication cost. Most industrial systems~\cite{hbase,calder11windows,chang06bigtable,Nightingale12FDS}
use $f+1$ replicas to tolerate $f$ failures,
while strong robustness techniques usually require more replicas:
asynchronous replication~\cite{Lamport98Part,Lamport01Paxos,Bolosky11Paxos} uses
$2f+1$ replicas to tolerate timing errors and BFT replication~\cite{Castro02Practical,Clement09Upright}
uses $3f+1$ replicas to tolerate arbitrary failures.
Unlike these approaches, \sys aims to provide
the strongest possible protection with only $f+1$ replicas.

\end{itemize}

To address these challenges,  \sys uses an architecture similar to scalable key-value stores like
Bigtable~\cite{chang06bigtable} and HBase~\cite{hbase} to achieve scalability to thousands
of machines; to achieve its robustness goals, we have introduced three key techniques:
pipelined commit, scalable end-to-end verification, and active storage.

{\bf Pipelined commit.} The pipelined commit protocol allows writes to
proceed in parallel at multiple servers but, by tracking the necessary dependency
information during failure-free execution, guarantees that, despite failures,
the system will be left in a state consistent with the ordering of writes
specified by the client. To achieve this
without affecting performance, \sys commits requests through a novel
2PC-like protocol~\cite{Gray78Notes,Lampson76Crash} that eliminates a disk write in the common case
at the cost of greater complexity in the recovery protocol, which is usually a good
tradeoff.
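To illustrate the ordering invariant that pipelined commit preserves, the following minimal sketch (in Python, with hypothetical names; this is not \sys' actual protocol) shows why recovery must discard any surviving write whose predecessors were lost:

```python
# Writes proceed in parallel at different servers, each tagged with the
# client's sequence number. After a crash, recovery keeps only the
# longest prefix of the client's write sequence: a "later" write may
# not survive if an "earlier" write from the same client was lost.
# All names here are illustrative, not Salus's real interface.

def recover(persisted_writes):
    """persisted_writes: dict mapping sequence number -> payload,
    as found on the storage servers after a crash."""
    committed = {}
    seq = 0
    # A write survives only while every earlier write also survived.
    while seq in persisted_writes:
        committed[seq] = persisted_writes[seq]
        seq += 1
    return committed

# Writes 0, 1, and 3 survived a crash, but 2 was lost: recovery must
# also discard 3, or the surviving state would violate write order.
assert recover({0: "a", 1: "b", 3: "d"}) == {0: "a", 1: "b"}
```

The surviving state is thus always a prefix of the order the client specified, which is exactly the guarantee the dependency tracking in pipelined commit enforces.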


{\bf Scalable end-to-end verification.} Each client maintains a Merkle tree
so that it can validate that each read request returns consistent and
correct data: if not, the client can reissue the request to another replica.
Reads can then safely proceed at a single replica without leaving clients
vulnerable to reading corrupted data. Further, our Merkle tree, unlike
those used in other systems that support end-to-end verification, is scalable:
each server only needs to keep the sub-tree corresponding to its own data,
and the client can rebuild and check the integrity of the whole tree even
after failing and restarting from an empty state.
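As a rough illustration of the mechanism (not \sys' actual implementation, and ignoring the per-server sub-tree partitioning), the sketch below builds a Merkle tree over a power-of-two number of blocks and lets a client verify a single block against the root hash it maintains:

```python
# Minimal Merkle tree sketch: the client keeps only the root hash; a
# server returns a block plus the sibling hashes on the path to the
# root, and the client recomputes the root to validate the read.
# Assumes a power-of-two number of blocks for brevity.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the list of levels; level 0 holds leaf hashes, the last level the root."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, idx):
    """Sibling hashes needed to recompute the root for leaf idx."""
    path = []
    for level in levels[:-1]:
        sibling = idx ^ 1
        path.append((sibling < idx, level[sibling]))  # (is sibling the left child?, hash)
        idx //= 2
    return path

def verify(block, path, root):
    digest = h(block)
    for sibling_is_left, sibling in path:
        digest = h(sibling + digest) if sibling_is_left else h(digest + sibling)
    return digest == root

levels = build_tree([b"a", b"b", b"c", b"d"])
root = levels[-1][0]              # this is all the client needs to keep
assert verify(b"c", proof(levels, 2), root)       # honest reply accepted
assert not verify(b"x", proof(levels, 2), root)   # corrupted reply rejected
```

A rejected reply corresponds exactly to the case where the client reissues the request to another replica; in \sys, each server additionally stores only the sub-tree covering its own data.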

{\bf Active Storage.} To prevent a single computation node from
corrupting data, \sys replicates both the storage and the computation
layer; further, \sys applies an update to the system's persistent state only
if the update is agreed upon by {\em all} of the replicated
computation nodes. We make two observations about active
storage. First, perhaps surprisingly,  replicating the computation
nodes can actually {\em improve} system performance by moving the
computation near the data (rather than vice versa), a good choice when
network bandwidth is a more limited resource than CPU cycles.  Second,
by requiring the {\em unanimous consent} of all replicas before an
update is applied, \sys provides better protection without increasing
replication overhead: \sys remains safe (i.e.~keeps its blocks consistent and
durable) despite two {\em commission} failures with just three-way
replication---the same degree of data replication needed by existing systems to
tolerate two permanent {\em omission} failures.  The flip side, of
course, is that insisting on unanimous consent can reduce the times
during which \sys is live (i.e.~its blocks are available)---but
liveness is easily restored by copying the data of any of the
available replicas to a new replica set.
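The unanimity requirement can be sketched in a few lines; the interface below is hypothetical and merely illustrates why a single corrupt computation node cannot sneak a bad update into storage:

```python
# Illustrative sketch of unanimous consent in active storage: each
# replicated computation node processes the update independently, and
# the update is applied only if every replica produced the same result.
# Names and interfaces are invented for illustration.
import hashlib

def unanimous_apply(state, update, replicas):
    """Apply `update` to `state` only if all replicas agree on the outcome."""
    results = [replica(state, update) for replica in replicas]
    digests = {hashlib.sha256(repr(r).encode()).hexdigest() for r in results}
    if len(digests) != 1:
        # Disagreement: a commission failure occurred at some replica.
        # Safety is preserved by rejecting the update; liveness is
        # restored separately (e.g., by replacing the replica set).
        raise RuntimeError("replicas disagree; update rejected")
    return results[0]

correct = lambda state, update: state + [update]
faulty = lambda state, update: state + [update, "garbage"]  # commission failure

assert unanimous_apply([], "w1", [correct, correct, correct]) == ["w1"]
try:
    unanimous_apply([], "w1", [correct, faulty, correct])
    raised = False
except RuntimeError:
    raised = True
assert raised
```

With three-way replication, even two faulty replicas cannot force a bad update through, since the remaining correct replica breaks unanimity; the price, as noted above, is availability whenever any replica disagrees or is unreachable.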


We have prototyped \sys by modifying the HBase
key-value store~\cite{hbase}.
We use fault injection to evaluate the robustness of \sys, and the results
confirm that \sys can
tolerate servers experiencing commission failures such as
memory corruption and disk corruption.
Our performance evaluation shows that stronger protection does not
come at a higher cost: \sys' overheads are low in all of our experiments,
and \sys can outperform HBase when disk bandwidth is
plentiful compared to network bandwidth. For example,
\sys' active storage protocol allows it to halve network bandwidth
consumption while increasing aggregate write throughput by a
factor of 1.74 in a well-provisioned system. However, we do not
have enough machines to fully evaluate the scalability of
\sys; this limitation motivates my next project.

\section{The Exalt emulator}

Evaluating the design of a large-scale
storage system is hard: most research organizations do not have enough
resources to build a sufficiently large testbed. This is certainly the case
for academic researchers, but even for researchers in industry, who
may theoretically tap a larger resource pool, reserving a substantial
amount of it for testing their ideas may not prove economically
feasible. Today, the largest academic Emulab deployment we are aware
of~\cite{probe} has about 1,000 machines and 1PB of disk space, which
is about two orders of magnitude smaller than the state of the art for
large-scale storage in industry; the mismatch is
actually worse, since by their nature researchers tend to be
interested in systems that are {\em larger} than what is currently
deployed. Academic researchers contemplating the design of storage
systems that can scale to exabytes therefore face at least a
three-order-of-magnitude {\em experimental gap} between what they
need and what they are likely to be able to get.


One common approach used in practice to address this mismatch is to
run the target system on a small cluster and
use the observed performance to predict its behavior at
scale. Unfortunately, this approach can miss bugs that only manifest
in high-stress tests~\cite{Mogul06Emergent} and can be inaccurate when
overhead does not grow linearly or when different sources of overhead
grow at different rates as the system scales. A second common technique is to modify the
system to substitute certain components with mocks or stubs of
significantly simpler logic and lower resource usage in order to test
the remaining components at a larger scale. This approach has
limitations as well: it is often hard to faithfully
reproduce in a stub the complex interactions that the original components may have had with the rest
of the system.

DieCast~\cite{Gupta08DieCast} addresses the experimental gap
using time dilation~\cite{Gupta06Infinity}: it runs multiple
processes in virtual machines on a single host, slows down each
process by a constant factor, and compensates for this slowdown by
magnifying the measured throughput by the same factor. However,
DieCast only addresses time scaling and does nothing to reduce the
large quantities of disk space necessary to evaluate large-scale
storage systems.

The system that comes closest to addressing the experimental gap for
storage systems is David~\cite{Agrawal11Emulating}. David leverages
the key observation that, to evaluate a local file system, it is not
necessary to store the actual data. David stores only the file system's
metadata: the data is discarded and regenerated on demand during reads
according to predefined rules. This technique allows David to evaluate
local file systems of much larger sizes than the local disk on
which they run. Unfortunately, this approach cannot be easily
applied to distributed storage services. For example, when users write
a key-value pair to HBase, the HBase region server layer adds a
timestamp and a region identifier to the write request and stores this
metadata, together with the users' data, on the local file system of
an HDFS datanode. David would erase the content of such files, but what
looks like data to HDFS is actually metadata for HBase: it is critical
for the execution of the request and cannot be discarded.


Our emulator draws inspiration from the same
observation that is at the core of David: the behavior of a storage
system is often not sensitive to the stored data. Indeed, it is common
to use files of all zero bytes
during performance evaluation of storage systems. The problem, as we
have seen, is that what looks like data to a low layer of a
distributed storage system may be metadata depended upon by a higher
layer. Our approach to solve this dilemma is simple: use lossless
data compression. This approach allows us to reap space-saving
benefits comparable to David's: files of all zero bytes or highly
regular patterns can be compressed at a very high ratio (1000:1 or
more) without any danger of losing precious metadata.
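As a sanity check of this premise, a few lines with a stock lossless compressor show that an all-zero buffer shrinks by several orders of magnitude while remaining fully recoverable:

```python
# Buffers of all zero bytes compress at very high ratios with
# off-the-shelf lossless compression, so storing the compressed form
# consumes little space yet discards nothing (unlike erasing the data).
import zlib

data = bytes(1_000_000)                  # 1 MB of zeros, as in a typical benchmark
compressed = zlib.compress(data, level=9)

assert len(compressed) < 5_000                 # ratio well above 200:1
assert zlib.decompress(compressed) == data     # lossless: all bytes recoverable
```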

Although the basic idea is simple, to realize an effective emulator
for large-scale storage systems we will need to address several
challenges:

\begin{itemize}

\item We need to identify lossless compression techniques that do not
in turn become bottlenecks in the system. Modern general-purpose
compression algorithms such as
Lempel-Ziv~\cite{Ziv77Universal,Ziv78Compression}, Prediction by
Partial Matching (PPM)~\cite{Cleary84Data,Moffat90Implementing}, and
Sequitur~\cite{Manning97Identifying} are CPU intensive. Our preliminary experiments show that even
a naive compressor that just scans for zeroes can introduce
significant overhead.  Therefore, instead of designing a general-purpose compressor,
we will design a specific compressor for each
specific workload: for a given workload, the number of message types
is finite, so we can know the structure of each message
in advance and use this information to compress more efficiently.
For example, in our HDFS experiment, for a 64KB write
message, the zero bytes always start at offset 533 and continue to
the end of the message. Knowing this allows our compressor
to eliminate the majority of the byte scans. We plan to perform the message
format analysis automatically: we can first run the system
with a general-purpose compressor, such as the zero-byte scanner, and
collect the results. Then we can analyze the messages and their
compression results to find the correlations.

\item We need to provide the compression and decompression functions
transparently to the target system: we hope to keep code
modifications to a minimum. First, we need to incorporate compression at
the network, file system, and memory interfaces.  The network
interface is the simplest, as we just need to compress a packet before
it is sent and decompress it after it is received; the file system
interface is more challenging, since a user can partially modify the
data it wrote previously, and re-compressing the corresponding data may
require moving all the data
that follows it.  To solve this problem, we plan to use an approach
similar to that of LFS~\cite{rosenblum92lfs}, turning a random write into
an append operation. Memory access
has no clear interface, which complicates our implementation.  Our
plan is to add two functions, compress and decompress, to the target
objects, to be invoked respectively after the object is updated and
before the object is accessed.  In addition, to incorporate these
functions into the target system automatically, we plan to use code
instrumentation techniques.

\item Since compression changes the system's behavior, how can we
still get meaningful results from the emulation? Depending on what we
are interested in testing, we imagine three ways in which we could
leverage Exalt. First, the pure emulation mode: all nodes run with
compression and decompression enabled while we measure CPU cycles,
disk and network bandwidth, and memory footprint. This mode can be used
to reveal unexpected increases in resource usage that hurt the
scalability of the system. Second, the emulation-real hybrid mode:
most nodes run in emulation mode, but some target nodes execute
with no compression. This mode is useful to (i) accurately measure the
throughput of servers that manage metadata, since there are usually few
of them and they are often the bottleneck of the system; and (ii) test
the stability of nodes under high load.  Finally, the
emulation-simulation hybrid mode: we run all nodes in emulation mode
but add latency to each network and disk access so that it matches
the latency experienced with non-compressed data. This mode is useful
to estimate the throughput and latency of the system.

\end{itemize}
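To make the first challenge concrete, the purely illustrative sketch below assumes that message-format analysis has told us that a given message type carries zeros from a fixed offset onward (the offset 533 comes from our HDFS write-message example; the record format and function names are invented):

```python
# Workload-specific compressor sketch: if the format of a message type
# is known in advance (here: payload zeros always begin at a fixed
# offset), the compressor can store just (header, zero-run length)
# instead of scanning and encoding the whole message generically.

PAYLOAD_OFFSET = 533  # learned from earlier runs with a generic zero scanner

def compress_write_message(msg: bytes):
    header = msg[:PAYLOAD_OFFSET]
    payload = msg[PAYLOAD_OFFSET:]
    # A real implementation could sample rather than scan; we verify
    # fully here so the sketch never silently corrupts a message.
    if payload.count(0) == len(payload):
        return ("zeros", header, len(payload))
    return ("raw", msg)  # fallback for messages that do not match the format

def decompress_write_message(record):
    if record[0] == "zeros":
        _, header, n = record
        return header + bytes(n)
    return record[1]

msg = bytes([7]) * PAYLOAD_OFFSET + bytes(64 * 1024)  # header + 64KB of zeros
rec = compress_write_message(msg)
assert rec[0] == "zeros"
assert decompress_write_message(rec) == msg  # metadata (header) is preserved
```

Note that the header, which may be metadata for a higher layer such as HBase, survives intact; only the zero payload is represented compactly.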
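For the file-system interface, the LFS-style translation of random writes into appends can be sketched as follows; the log structure is hypothetical and only illustrates why a partial overwrite need not disturb previously compressed records:

```python
# LFS-style sketch: instead of re-compressing in place when a client
# overwrites part of a previously written region (which could shift all
# following data), every write is appended to a log together with its
# logical offset, and reads replay the log with later records winning.

class AppendOnlyFile:
    def __init__(self):
        self.log = []  # append-only list of (logical_offset, data) records

    def write(self, offset, data):
        # A random overwrite becomes a simple append; earlier records
        # (and their compressed representations) are never touched.
        self.log.append((offset, bytes(data)))

    def read(self, offset, length):
        buf = bytearray(length)
        for rec_off, data in self.log:  # later records override earlier ones
            start = max(offset, rec_off)
            end = min(offset + length, rec_off + len(data))
            if start < end:
                buf[start - offset:end - offset] = data[start - rec_off:end - rec_off]
        return bytes(buf)

f = AppendOnlyFile()
f.write(0, b"aaaaaaaa")
f.write(2, b"BB")              # partial overwrite: appended, not rewritten
assert f.read(0, 8) == b"aaBBaaaa"
```

A real implementation would also need periodic cleaning of superseded records, as in LFS, but the key property holds: compression can be applied once per record, at append time.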


This emulator, if successful, will offer resource-limited academics a tool to test the performance
of their prototypes at a significantly larger scale than currently possible.



