\section{Requirements and model}

\foosys provides the abstraction of a large collection of virtual
disks, each of which is an array of fixed-sized blocks. Each virtual
disk is a \emph{volume} that can be mounted by a client running in the
datacenter that hosts the volume.  The volume's size (e.g., several
hundred GB to several hundred TB) and block size (e.g., 4~KB to
256~KB) are specified at creation time.

A volume's interface supports \get and \pput, which on a disk
correspond to read and write.  A client may have many such commands
outstanding to maximize throughput. At any given time, only one client
may mount a volume for writing, and during that time no other client
can mount the volume for reading. Different clients may mount and
write different volumes at the same time, and multiple clients may
simultaneously mount a read-only snapshot of a volume.
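The mount discipline above can be sketched as follows; this is an illustrative Python model, not the actual implementation, and all names (`Volume`, `mount_rw`, `mount_ro`) are hypothetical:

```python
class Volume:
    """Illustrative model of a volume: an array of fixed-size blocks
    with a single-writer mount discipline. All names are hypothetical."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size   # fixed at creation, e.g. 4 KB .. 256 KB
        self.num_blocks = num_blocks   # volume size divided by block size
        self.blocks = {}               # sparse block store
        self.writer = None             # at most one read/write mount
        self.readers = set()           # read-only mounts of the live volume

    def mount_rw(self, client):
        # Only one client may mount for writing, and only while
        # no one is reading the live volume.
        if self.writer is None and not self.readers:
            self.writer = client
            return True
        return False

    def mount_ro(self, client):
        # No live read-only mount while a writer holds the volume.
        # (Read-only *snapshots*, not modeled here, can be shared freely.)
        if self.writer is None:
            self.readers.add(client)
            return True
        return False

    def put(self, client, index, data):
        # Write one fixed-size block; only the mounted writer may call.
        assert client == self.writer and len(data) == self.block_size
        self.blocks[index] = data

    def get(self, index):
        # Read one block (zero-filled if never written).
        return self.blocks.get(index, b"\0" * self.block_size)
```

The model captures only the mutual-exclusion rules; replication, snapshots, and pipelined outstanding requests are omitted.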

We explicitly designed \sys to support only a single writer per
volume for two reasons. First, as demonstrated by the success of
Amazon EBS, this model is sufficient to support disk-like
storage. Second, we are not aware of a design that would allow \sys to
support multiple writers while achieving its other goals: strong
consistency,\footnote{More precisely, {\em ordered commit} (defined in
  Section~\ref{sec:consistency}) which implies
  FIFO-compliant linearizability.}  scalability, and end-to-end
verification for read requests.

Even though each volume has only a single writer at a time, a
distributed block store has several advantages over a local one.
Spreading a volume across multiple machines not only allows disk
throughput and storage capacity to exceed the capabilities of a single
machine, but also balances load and increases resource utilization.
Further, storing data on multiple servers opens up
opportunities to improve availability and durability: if a server
fails, the client can access another server holding a copy of the data;
if the client fails,
the user can quickly start a new client by recovering data from
the servers.


To minimize cost, a typical server in existing storage deployments is
relatively storage heavy, with a total capacity of up to
24\,TB~\cite{HadoopInFacebook,HadoopInYahoo}. We expect a storage
server in a \sys deployment to have ten or more SATA disks and two
1\,Gbit/s network connections. In this configuration, disk bandwidth is
several times more plentiful than network bandwidth, so the \foosys
design seeks to minimize network bandwidth consumption.
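The bandwidth gap can be checked with a back-of-envelope calculation. The per-disk throughput figure below (~100~MB/s sequential per SATA disk) is an assumed ballpark, not a number from the text:

```python
# Back-of-envelope bandwidth comparison for the assumed configuration.
DISKS = 10
DISK_MBPS = 100            # assumed sequential throughput per SATA disk
NICS = 2
NIC_MBPS = 1000 / 8        # 1 Gbit/s = 125 MB/s

disk_bw = DISKS * DISK_MBPS    # 1000 MB/s aggregate disk bandwidth
net_bw = NICS * NIC_MBPS       # 250 MB/s aggregate network bandwidth
print(f"disk/network bandwidth ratio: {disk_bw / net_bw:.0f}x")
```

Under these assumptions, aggregate disk bandwidth exceeds network bandwidth by roughly 4x, which is why the design treats network bandwidth as the scarce resource.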

\subsection{Failure model}
\label{sec:failure-model}

\foosys is designed to operate on an unreliable network with
unreliable nodes. The network can drop, reorder, modify, or
arbitrarily delay messages.

For storage nodes, we assume that 1) servers can crash and recover,
temporarily making their disks' data unavailable (transient omission
failure); 2) servers and disks can fail, permanently losing all their
data (permanent omission failure); and 3) disks and the software that
controls them can cause corruption, where some blocks are lost or
modified, possibly silently~\cite{prabhakaran05iron}, and servers can
experience memory corruption, software bugs, etc., sending corrupted
messages to other nodes (commission failure). When calculating failure
thresholds, we only take into account commission failures and
permanent omission failures.  Transient omission failures are not
treated as failures: in asynchronous systems a node that fails
and recovers is indistinguishable from a slow node.
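The failure taxonomy and the threshold-counting rule above can be made concrete with a small sketch (names hypothetical):

```python
from enum import Enum

class Failure(Enum):
    TRANSIENT_OMISSION = 1   # crash and recover; data temporarily unavailable
    PERMANENT_OMISSION = 2   # server or disk fails; data permanently lost
    COMMISSION = 3           # corruption or bugs; node emits incorrect output

def counts_toward_threshold(failure):
    # Transient omissions are excluded: in an asynchronous system a node
    # that fails and recovers is indistinguishable from a slow node.
    return failure in (Failure.PERMANENT_OMISSION, Failure.COMMISSION)
```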

In line with \sys' aim to provide end-to-end robustness guarantees, we
do not try to explicitly enumerate and patch all the
different ways in which servers can fail. Instead, we design \sys to
tolerate arbitrary failures, both of omission, where a faulty node
fails to perform actions specified by the protocol, such as sending,
receiving, or processing a message; and of commission~\cite{clement09upright}, where a
faulty node performs arbitrary actions not called for by the
protocol. However, while we
assume that faulty nodes can generate arbitrarily
erroneous state and output, given the datacenter environment we
target, we explicitly do not attempt to tolerate cases where a
malicious adversary controls some of the servers. Hence, we replace
the traditional BFT assumption that \emph{faulty nodes cannot break
  cryptographic primitives}~\cite{Rivest83Method} with the stronger
(but fundamentally similar) assumption that \emph{a faulty node never
  produces a checksum that appears to be a correct checksum produced
  by a different node.}  In practice, this means that where in a
traditional Byzantine-tolerant system~\cite{Castro00Proactive} we
might have used signatures or arrays of message authentication codes
(MACs) with pairwise secret keys, we instead {\em weakly sign}
communication using checksums salted with the checksum creator's
well-known ID.
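A minimal sketch of such weak signing, assuming SHA-256 as the checksum (the paper does not specify one) and with hypothetical function names:

```python
import hashlib

def weak_sign(creator_id: bytes, message: bytes) -> bytes:
    # A checksum salted with the creator's well-known ID. Unlike a MAC or
    # signature there is no secret key: this covers only non-adversarial
    # faults, under the assumption that a faulty node never produces a
    # checksum that appears to come from a different node.
    return hashlib.sha256(creator_id + b"\x00" + message).digest()

def weak_verify(creator_id: bytes, message: bytes, tag: bytes) -> bool:
    return weak_sign(creator_id, message) == tag
```

Because no pairwise secrets are needed, weak signing avoids the key-distribution and per-recipient MAC-array costs of traditional Byzantine-tolerant protocols.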

\foosys relies on weak synchrony assumptions for both safety and
liveness. For safety, \foosys assumes that clocks are sufficiently
synchronized that a ZooKeeper lease is never considered valid by a
client when the server considers it invalid.
\foosys
only guarantees liveness during \emph{synchronous intervals} where
messages sent between correct nodes are received and processed within
some timeout~\cite{Burrows06Chubby}.
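One standard way to enforce the lease-safety assumption is for the client to treat its lease as expiring earlier than the server does, by an assumed bound on clock skew. This is an illustrative sketch; the constant and function names are hypothetical:

```python
MAX_SKEW = 0.5  # assumed bound on client/server clock skew, in seconds

def client_lease_valid(now: float, lease_expiry: float) -> bool:
    # The client conservatively subtracts the skew bound, so it never
    # considers the lease valid after the server considers it invalid.
    return now < lease_expiry - MAX_SKEW

def server_lease_valid(now: float, lease_expiry: float) -> bool:
    return now < lease_expiry
```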


\subsection{Consistency model}\label{sec:consistency}

To be usable as a virtual disk, \sys must ensure the {\em
  standard disk semantics} provided by physical disks. These semantics
allow some requests to be marked as {\em barriers}. A disk must
guarantee that all requests received before a barrier are committed
before the barrier, and all requests received after the barrier are
committed after the barrier.
Additionally, a disk guarantees {\em freshness}: a read to a block
returns the latest committed write to that block.
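The barrier rule can be expressed as a check over an execution trace; the sketch below is illustrative (names hypothetical) and assumes every request eventually commits:

```python
def respects_barriers(received, committed_pos):
    """received: requests in arrival order as (request_id, is_barrier) pairs.
    committed_pos: request_id -> position in the commit order.
    Checks the standard disk semantics: every request received before a
    barrier commits before it; every request received after commits after."""
    for i, (req, is_barrier) in enumerate(received):
        if not is_barrier:
            continue
        for j, (other, _) in enumerate(received):
            if j < i and committed_pos[other] >= committed_pos[req]:
                return False
            if j > i and committed_pos[other] <= committed_pos[req]:
                return False
    return True
```

Note that requests on the same side of a barrier may still commit in either order; only the barrier constrains them.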

\sys implements a stronger guarantee: when there are no more than $f$ failures (we use $f=2$),
\sys guarantees both freshness and a property we call {\em \oc}:
if a write request $R$ commits, then all write requests that were sent by
the client before $R$ are guaranteed to eventually commit. Note that \oc eliminates the need for
explicit barriers since every write request functions as a
barrier. Although we did not set out to achieve \oc and its stronger
guarantees, \sys provides them without any noticeable effect on
performance.
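Ordered commit is equivalent to requiring that, once the system quiesces, the committed writes form a prefix of the order in which the client sent them. A sketch of that check, treating the committed set as the eventual outcome (names hypothetical):

```python
def satisfies_ordered_commit(sent, committed):
    """sent: write ids in the order the client issued them.
    committed: set of write ids that (eventually) committed.
    Ordered commit: if R commits, everything sent before R commits too,
    i.e. the committed writes are a prefix of the sent order."""
    prefix = True
    for w in sent:
        if w in committed and not prefix:
            return False            # a committed write follows a hole
        prefix = prefix and (w in committed)
    return True
```

Seen this way, every write acts as a barrier: no later write can commit past an earlier one that is lost.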

%Under severe failures \sys provides the weaker {\em prefix semantics}:
%in these circumstances, a client that crashes and restarts may observe
%only a prefix of the committed writes; a tail of committed writes may
%be lost. This semantics is not new to \sys: it is the semantics
%familiar to every client that interacts with a crash-prone server that
%acknowledges writes immediately but logs them asynchronously; it is
%also the semantics to which every other geo-replicated storage systems we
%know of~\cite{calder11windows,mahajan10depot,llyod11sosp} retreats when failures
%put it under duresse. The reason is simple: while losing writes is
%always disappointing, prefix semantics has at least the merit of
%leaving the disk in a legal state.  Still, data loss should
%be rare, and  \sys falls back on prefix semantics only in the following
%scenario: the client crashes, one or more of the servers suffer at the
%same time a commission failure, and the rest of the servers are
%unavailable.  If the client does not fail or at least one server is
%correct and available, \sys continues to guarantee standard disk
%semantics.
%Figure~\ref{fig:consistency} summarizes the consistency
%guarantees of \sys in various failure scenarios.

\sys mainly focuses on tolerating arbitrary failures (in the sense specified in Section~\ref{sec:failure-model})
of server-side storage systems, since they entail
most of the complexity and are primarily responsible for preserving the durability and availability
of data.
Client commission failures can also be handled using replication, but
doing so is beyond the scope of this work.










