\section{Introduction}

We use the idea of ``separating data and metadata''
to make storage systems more robust and scalable.

Concretely, we have built three systems that target
different scales and goals.

{\bf Gnothi.} Gnothi targets small-scale deployments and tolerates
omission failures. It combines the high availability of asynchronous
replication with the low cost of synchronous replication. The key
is to separate data and metadata: metadata is fully replicated to
$2f+1$ replicas, while data is partially replicated to only $f+1$ replicas.
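As a rough illustration of this split (hypothetical code, not Gnothi's actual implementation; `place_write` and its parameters are invented for this sketch), metadata for a write goes to all $2f+1$ replicas, while the bulky data goes to only $f+1$ of them:

```python
# Hypothetical sketch of separating data and metadata replication:
# small metadata is fully replicated to all 2f+1 replicas, while the
# large data payload is sent to only f+1 replicas chosen per block.

def place_write(block_id, f, replicas):
    """Return (metadata_set, data_set) for one write.

    replicas: list of 2f+1 replica ids.
    Metadata is fully replicated; data is sent to a preferred set of
    f+1 replicas chosen deterministically from the block id.
    """
    assert len(replicas) == 2 * f + 1
    metadata_set = list(replicas)          # all 2f+1 replicas hold metadata
    start = block_id % len(replicas)       # rotate the data set for balance
    data_set = [replicas[(start + i) % len(replicas)] for i in range(f + 1)]
    return metadata_set, data_set
```

Because metadata is cheap to replicate widely, any majority of replicas can tell which writes exist, while the storage cost of the data itself stays close to $f+1$.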

{\bf Salus.} Salus targets large-scale deployments and tolerates
arbitrary failures while providing strong robustness guarantees.
It rests on three key ideas: pipelined commit performs data transfer
in parallel and metadata transfer sequentially;
active storage replicates computation as well as storage;
and scalable end-to-end verification lets clients validate the data
returned by each read.
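The pipelined-commit idea can be sketched in a few lines (hypothetical code; `pipelined_commit` and the record format are invented for illustration): data is shipped in parallel, but commit metadata is applied strictly in client order, so a crash can never expose write $i+1$ without write $i$:

```python
# Hypothetical sketch of pipelined commit: data for a batch of writes is
# transferred in parallel (order does not matter), but commit records
# (metadata) are applied in sequence-number order, so the surviving state
# is always a prefix of the client's write order.

def pipelined_commit(writes):
    # Phase 1 (parallel): ship data for every write; arrival order is
    # irrelevant, so this phase can use all servers concurrently.
    prepared = {w["seq"]: w["data"] for w in writes}
    # Phase 2 (sequential): commit metadata in sequence order; stop at
    # the first gap, leaving a prefix-consistent state after a crash.
    committed = []
    seq = 0
    while seq in prepared:
        committed.append((seq, prepared[seq]))
        seq += 1
    return committed
```

If the data for sequence number 2 never arrives, writes 0 and 1 still commit, but write 3 does not, preserving the client's ordering guarantee.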


{\bf Exalt.} Exalt evaluates large-scale systems on limited
resources. The key is to separate metadata from
data so that data can be compressed. We are designing
a data structure that makes data distinguishable from
metadata at different storage layers.
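A minimal sketch of such a data structure (hypothetical; the tag format, `encode`, and `compressed_size` are invented for illustration) is a tagged record whose header tells every storage layer whether the payload is metadata, which must be preserved verbatim, or user data, which the emulator may replace with compressible filler:

```python
import zlib

# Hypothetical record format: a one-byte tag distinguishes metadata
# (kept verbatim) from user data (replaced with zeros at write time in
# the emulator, so it compresses to almost nothing at every layer).

META, DATA = b"M", b"D"

def encode(tag, payload):
    """Tag + 4-byte big-endian length + payload (zeroed if user data)."""
    body = payload if tag == META else b"\x00" * len(payload)
    return tag + len(payload).to_bytes(4, "big") + body

def compressed_size(record):
    """Size of the record after compression, as a storage layer sees it."""
    return len(zlib.compress(record))
```

Because only the metadata bytes carry information, a record with a multi-kilobyte data payload compresses to a handful of bytes, which is what makes very high compression ratios plausible.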

In conclusion, metadata and data have different
characteristics and access patterns, and are used
in different ways. It is often beneficial to process
and store them differently to achieve multiple
goals (e.g., robustness and scalability) simultaneously.



\iffalse
This proposal describes how to build storage systems
that can scale to exabytes of data and tolerate a wide
range of faults.

The primary directive of storage - not to lose data - is hard
to carry out: disks and storage sub-systems can fail in
unpredictable ways, and so can the CPUs and memories
of nodes that are responsible for accessing the data.
Furthermore, the growing scale and complexity of storage
systems create greater opportunities for error and corruption.

Most practical storage systems today [GFS/Bigtable, HDFS/HBase,
Azure, Spanner, ....... ] do consider robustness as
one of the major design principles, but unexpected failures
still occur [Amazon, Azure] and can cost tens of millions of dollars
to those companies. Investigations of these systems demonstrate
that achieving the combination of robustness and scalability
presents several challenges and instead of solving these challenges,
existing systems usually choose to sacrifice robustness for
better efficiency and scalability.

First, to achieve high throughput, existing systems incorporate a large
number of servers to process different requests in parallel. Parallelism,
however, can violate the ordering guarantees of the client, as ``later'' writes
may survive a crash, while ``earlier'' ones of the same client are lost.
Such violations can be fatal to applications such as block stores.

Second, aiming for efficiency and high availability at low cost can have
unintended consequences on robustness by introducing single points
of failure. For example, in order to maximize throughput and availability
for reads while minimizing latency and cost, scalable storage systems
execute read requests at just one replica. If the chosen replica experiences
a \emph{commission failure} that causes it to generate erroneous state or
output, the data returned to the client could be incorrect. Similarly, to simplify
the design and reduce cost many systems that replicate their storage layer
for fault tolerance leave unreplicated the computation nodes that can
modify the state of that layer: for example, a memory error or an errant
PUT at a single HBase region server can irrevocably and undetectably corrupt
all 3 replicas of data on HDFS datanodes.

Third, there is usually a tradeoff between robustness and replication cost:
synchronous replication, which can only tolerate omission failures, uses $f+1$
replicas to tolerate $f$ failures; to tolerate asynchronous events, the replication
cost increases to $2f+1$; and to tolerate arbitrary failures, the cost further
increases to $3f+1$.  Most industrial deployments choose $f+1$ replication
and a few choose $2f+1$, which makes them vulnerable to commission
failures like memory corruption.
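This cost ladder can be tabulated (replicas required to tolerate $f$ failures):

\begin{center}
\begin{tabular}{ll}
Failure model & Replicas \\
\hline
Omission only (synchronous) & $f+1$ \\
Omission with asynchrony & $2f+1$ \\
Arbitrary (Byzantine) & $3f+1$ \\
\end{tabular}
\end{center}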

Finally, it is challenging to test an implementation at full scale simply because
doing so requires substantial resources and is thus economically infeasible. However,
related research demonstrates that some problems only occur under
high stress. They include bugs that only manifest under heavy load, which
hurt the robustness of the system, and unexpected performance bottlenecks,
which hurt the scalability of the system.

To address these challenges this proposal introduces four key techniques:
pipelined commit, scalable end-to-end verification, robust and efficient
replication, and emulation test framework.

{\bf Pipelined commit.} The pipelined commit protocol allows writes to
proceed in parallel at multiple servers but, by tracking the necessary dependency
information during failure-free execution, guarantees that, despite failures,
the system will be left in a state consistent with the ordering of writes
specified by the client.

{\bf Scalable end-to-end verification.} Each client maintains a Merkle tree
so that it can validate that each Read request returns consistent and
correct data: if not, the client can reissue the request to another replica.
Reads can then safely proceed at a single replica without leaving clients
vulnerable to reading corrupted data. Further, our Merkle tree, unlike
those used in other systems that support end-to-end verification, is scalable:
each server only needs to keep the sub-tree corresponding to its own data,
and the client can rebuild and check the integrity of the whole tree even
after failing and restarting from an empty state.
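A minimal sketch of the client-side check (hypothetical code, not the proposal's implementation; the helper names are invented, and for brevity this sketch recomputes the whole root rather than walking a proof path over a server's sub-tree):

```python
import hashlib

# Hypothetical sketch of end-to-end read verification: the client keeps
# a Merkle root over its blocks and checks each read against it, so a
# single corrupt replica cannot return bad data undetected.

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    """Fold the block hashes pairwise up to a single root hash."""
    level = [h(x) for x in blocks]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verified_read(blocks, root, idx):
    """Return blocks[idx] only if the set of blocks matches the trusted root."""
    if merkle_root(blocks) != root:
        raise ValueError("corrupt replica: reissue read at another replica")
    return blocks[idx]
```

On a mismatch the client simply reissues the read at another replica, which is what makes single-replica reads safe.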

{\bf Efficient and robust replication.} This proposal presents two novel replication
protocols that use $f+1$ replicas to tolerate a wide range of failures: the
\emph{Gnothi} protocol tolerates asynchronous events by separating data and metadata,
where metadata is fully replicated to $2f+1$ replicas and data is partially
replicated to $f+1$ replicas. The \emph{active storage} protocol tolerates
arbitrary failures by requiring the unanimous consent of $f+1$ computation nodes;
if they cannot reach agreement, the system replaces them with $f+1$ other
nodes. Furthermore, this proposal discusses how to perform geo-replication
to further improve robustness: data stored at the backup data centers must
be valid no matter when the primary data center fails.
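The active-storage rule reduces to a one-line check (hypothetical sketch; `apply_update` is invented for illustration): with only $f+1$ computation nodes there is no majority to vote, so any disagreement triggers replacement rather than masking:

```python
# Hypothetical sketch of the active-storage agreement rule: an update is
# applied only with the unanimous consent of all f+1 computation nodes;
# any divergence triggers replacement of the node set, not a vote.

def apply_update(node_outputs):
    """node_outputs: the f+1 results computed independently by each node."""
    if len(set(node_outputs)) == 1:
        return node_outputs[0]          # unanimous: commit the result
    raise RuntimeError("divergent outputs: replace the computation nodes")
```

This is how $f+1$ replicas can detect (though not mask in place) arbitrary failures that would otherwise require $3f+1$ replicas.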

{\bf Emulation test framework.} To test the implementation with limited resources,
our emulator compresses data on disk and on the network. The fact that storage testing
is usually not sensitive to the actual data allows us to achieve a high compression
ratio (1000:1 or more), so that we can run a large-scale system with tens to hundreds
of machines.
\fi




