\input{salus_fig_arch}

The architecture of \sys, as Figure~\ref{fig:exalt-arch} shows, bears
considerable resemblance to that of HBase. Like HBase, \sys uses
HDFS as its reliable and scalable storage layer, partitions key ranges
within a table into distinct regions for load balancing, and supports
the abstraction of a region server responsible for handling requests
for the keys within a region. As in HBase, blocks are mapped to their
region server through a Master node, leases are managed using
ZooKeeper, and \sys clients need to install a {\em block driver} to
access the storage system, not unlike the client library used for the
same purpose in HBase. These similarities are intentional: they aim to
retain in \sys the ability to scale to thousands of nodes and tens of
thousands of disks that has secured HBase's success. Indeed, one of the
main challenges in designing \sys was to achieve its robustness goals
(strict ordering guarantees for write operations across multiple
disks, end-to-end correctness guarantees for read operations, strong
availability and durability guarantees despite arbitrary failures)
without perturbing the scalability of the original HBase design. With
this in mind, we have designed \sys so that, whenever possible, it
buttresses architectural features it inherits from HBase---and does so
scalably. Thus, the core of \sys' active storage is a three-way replicated
region server (\rrs), which upgrades the original HBase region server
abstraction to guarantee safety despite up to two arbitrary server
failures. Similarly, \sys' end-to-end verification is performed within
the familiar architectural feature of the block driver, though upgraded to
support \sys' scalable verification mechanisms.

Figure~\ref{fig:exalt-arch} also helps describe the role played by
our novel techniques (pipelined commit, scalable end-to-end
verification, and active storage) in the operation of \sys.

Every client request in \sys is mediated by the block driver, which
exports a virtual disk interface by converting the application's API
calls into \sys \get and \pput requests. The block
driver, as we saw, is the component in charge of performing \sys'
scalable end-to-end verification (see Section~\ref{sec:etoeprot}): for \pput
requests it generates the appropriate metadata, while for \get
requests it uses the request's metadata to check whether the data
returned to the client is consistent.
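The division of labor just described can be sketched in a few lines. The sketch below is illustrative only: it assumes a per-block digest as the verification metadata, which is a simplification of \sys' actual scheme (Section~\ref{sec:etoeprot}), and the names \texttt{BlockDriver}, \texttt{put}, and \texttt{get} are hypothetical.

```python
import hashlib


class BlockDriver:
    """Illustrative sketch of a block driver's end-to-end checks.

    Assumption: verification metadata is a per-block SHA-256 digest
    kept client-side; this stands in for Salus's scalable scheme.
    """

    def __init__(self, storage):
        self.storage = storage   # stands in for the remote store (block_id -> bytes)
        self.metadata = {}       # block_id -> expected digest

    def put(self, block_id, data):
        # On a put, generate verification metadata before handing the
        # data to the (untrusted) storage system.
        self.metadata[block_id] = hashlib.sha256(data).hexdigest()
        self.storage[block_id] = data

    def get(self, block_id):
        # On a get, check the returned data against the metadata
        # recorded at put time before releasing it to the application.
        data = self.storage[block_id]
        if hashlib.sha256(data).hexdigest() != self.metadata[block_id]:
            raise IOError(f"verification failed for block {block_id}")
        return data
```

A get that returns data corrupted by a faulty server thus fails the digest check at the client rather than being silently accepted.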

To issue a request, the client (or rather, its block driver) contacts
the Master, which identifies the \rrs responsible for servicing the
block that the client wants to access.  The client caches this
information for future use and forwards the request to that \rrs.  The
first responsibility of the \rrs is to ensure that the request commits
in the order specified by the client. This is where the pipelined
commit protocol becomes important: as we will see in more detail in
Section~\ref{sec:pipeline}, the protocol requires only minimal coordination
to enforce dependencies among requests assigned to distinct
\rrs{s}. If the request is a \pput, the \rrs also needs to ensure
that the data associated with the request is made persistent, despite
the possibility of individual \rs{s} suffering commission
failures. This is the role of active storage (see Section~\ref{sec:active}): the
responsibility of processing \pput requests is no longer assigned to a
single \rs, but is instead conditioned on the set of \rs{s} in the \rrs
achieving unanimous consent on the update to be performed.  Thanks to
\sys' end-to-end verification guarantees,  \get requests can instead be
safely carried out by a single \rs (with obvious
performance benefits), without running the risk that the client sees
incorrect or stale data.
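The asymmetry between writes and reads at the \rrs can be sketched as follows. This is a toy illustration, not \sys' implementation: the classes \texttt{Replica} and \texttt{ReplicatedRegionServer} and the prepare/apply split are assumptions made for exposition, and the unanimity check simply compares the updates the three replicas propose.

```python
class Replica:
    """A toy region server replica with a prepare/apply split (assumed)."""

    def __init__(self):
        self.store = {}

    def prepare(self, key, value):
        # The update this replica proposes to apply.
        return (key, value)

    def apply(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store[key]


class ReplicatedRegionServer:
    """Illustrative three-way replicated region server (RRS)."""

    def __init__(self, replicas):
        self.replicas = replicas

    def put(self, key, value):
        # Writes require unanimous consent: every replica must propose
        # the same update, so a single faulty replica cannot corrupt
        # the committed state.
        proposed = [r.prepare(key, value) for r in self.replicas]
        if len(set(proposed)) != 1:
            raise RuntimeError("no unanimous consent; write aborted")
        for r in self.replicas:
            r.apply(key, value)

    def get(self, key):
        # Reads are served by a single replica: the client's end-to-end
        # verification catches an incorrect answer, so no quorum is
        # needed on the read path.
        return self.replicas[0].read(key)
```

A replica that proposes a divergent update causes the write to abort rather than commit, while reads avoid the cost of contacting all three replicas.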
