\section{Background}
\label{sec:background}

Salus' starting point is the
scalable architecture of HBase/HDFS, which Salus carefully modifies to
boost robustness without introducing new bottlenecks. We chose the
HBase/HDFS architecture for three main reasons: first, it
provides a key-value interface that can be easily modified to support a
block store; second, it has a large user base that includes
companies such as Yahoo!, Facebook, and Twitter; and third,
unlike other successful large-scale storage systems with similar
architectural features, such as Windows Azure Storage~\cite{calder11windows} and Google's
Bigtable/GFS~\cite{ghemawat03google,chang06bigtable}, HBase/HDFS is open source.



\paragraph{HDFS} HDFS~\cite{Shvachko10HDFS}
is an append-only distributed file system whose design is based on
Google's File System~\cite{ghemawat03google}. It stores
the system's namespace and membership information in
a {\em NameNode} and replicates the data over a
set of {\em DataNodes}. Each file consists of a set of
blocks, and HDFS ensures that each block is replicated across
a specified number of DataNodes (three by default) despite DataNode failures.
HDFS is widely used, primarily because of its scalability.
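The NameNode's bookkeeping described above can be illustrated with a small sketch (hypothetical names; the placement policy here is a simplified round-robin, whereas real HDFS placement is rack-aware):

```python
import itertools

REPLICATION = 3  # HDFS's default replication factor


def place_blocks(file_blocks, datanodes, replication=REPLICATION):
    """Assign each block of a file to `replication` distinct DataNodes.

    A simplified round-robin stand-in for the NameNode's placement
    decision; it only illustrates that every block ends up with the
    specified number of replicas.
    """
    placement = {}
    ring = itertools.cycle(datanodes)
    for block in file_blocks:
        placement[block] = [next(ring) for _ in range(replication)]
    return placement
```

When a DataNode fails, the NameNode re-replicates the blocks it held so that each block again has the specified number of replicas.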

\paragraph{HBase} HBase~\cite{hbase} is a distributed key-value store whose design is based on
Google's Bigtable~\cite{chang06bigtable}. It exports
the abstraction of tables accessible through a \pput/\get
interface.  Each table is split into multiple {\em regions} of
non-overlapping key-ranges (for load balancing). Each region is
assigned to one {\em \rs} that is responsible for all requests to that
region.  \Rs{s} use HDFS as a storage layer to ensure that data is
replicated persistently across enough nodes. Additionally, HBase uses
a {\em \m} node to manage the assignment of key-ranges to various
\rs{s}.
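The key-range partitioning can be sketched as follows (a hypothetical helper; in HBase, clients actually locate regions through the system's metadata table):

```python
import bisect

def find_region(region_starts, key):
    """Return the index of the region whose key-range contains `key`.

    `region_starts` is the sorted list of each region's first key;
    since regions cover non-overlapping ranges, the owning region is
    the one with the largest start key <= `key`.
    """
    return bisect.bisect_right(region_starts, key) - 1
```

Because the ranges are non-overlapping, every key maps to exactly one region, and hence to one \rs.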

\Rs{s} receive clients' \pput and \get requests and transform them
into equivalent requests that are appropriate for the append-only
interface exposed by HDFS.  On receiving a \pput, a \rs logs the
request to a write-ahead log stored on HDFS and updates its sorted,
in-memory map (called {\em memstore}) with the new \pput.  When the
size of the memstore exceeds a predefined threshold, the \rs{}
{\em flushes} the memstore to a {\em checkpoint} file stored on HDFS.
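This write path can be sketched as follows (a minimal illustration with hypothetical names; the real \rs{} logic is written in Java and considerably more involved):

```python
class RegionServerSketch:
    """Simplified write path: log, update memstore, flush when full."""

    def __init__(self, flush_threshold=3):
        self.log = []               # stands in for the HDFS write-ahead log
        self.memstore = {}          # sorted in-memory map (a dict, for brevity)
        self.checkpoints = []       # immutable checkpoint files, newest last
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.log.append((key, value))   # 1. append to the write-ahead log
        self.memstore[key] = value      # 2. update the in-memory map
        if len(self.memstore) >= self.flush_threshold:
            self.flush()                # 3. flush once the threshold is hit

    def flush(self):
        # Write the sorted memstore contents as a new immutable checkpoint.
        self.checkpoints.append(dict(sorted(self.memstore.items())))
        self.memstore = {}
```

Logging before updating the memstore is what makes the \pput durable: after a crash, the \rs{} can replay the log to rebuild any memstore state that had not yet been flushed.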

On receiving a \get request for a key, the \rs looks up the
key in its memstore. If a match is found, the \rs returns
the corresponding value; otherwise, it looks up the key in various
checkpoints, starting from the most recent one, and returns the
first matching value. Periodically, to reduce storage overhead
and \get latency, the \rs performs {\em compaction} by
reading a number of contiguous checkpoints and merging them into a
single checkpoint.
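The read path and compaction can be sketched in the same simplified model (hypothetical helpers, with the memstore as a dict and checkpoints as a list of dicts, newest last):

```python
def get(memstore, checkpoints, key):
    """Look up `key` in the memstore first, then in checkpoints,
    newest first, returning the first match (None if absent)."""
    if key in memstore:
        return memstore[key]
    for cp in reversed(checkpoints):    # most recent checkpoint first
        if key in cp:
            return cp[key]
    return None


def compact(checkpoints):
    """Merge contiguous checkpoints into one; newer values win."""
    merged = {}
    for cp in checkpoints:              # iterate oldest to newest
        merged.update(cp)               # later (newer) writes overwrite
    return [merged]
```

Compaction preserves \get results while bounding the number of checkpoints a lookup must scan, which is exactly the trade the \rs makes between background I/O and read latency.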

\paragraph{ZooKeeper} ZooKeeper~\cite{Hunt10ZooKeeper} is a replicated coordination
service. It is used by HBase to ensure that each key-range is assigned
to at most one \rs.
