\section{Introduction}

An ideal storage system should achieve
multiple properties simultaneously: the system should guarantee
data durability and availability despite unexpected
hardware and software failures; it should be able to
scale to support a growing number of users; and it should
provide good performance at a reasonable cost.
However, when building such a storage system, one faces
a fundamental tension, as higher robustness typically incurs
higher costs and thus hurts both efficiency and scalability.

This proposal aims to answer two crucial questions along this
difficult road: how can we design a system that achieves
multiple properties simultaneously? And how can we validate the
claims of these properties (especially scalability) for our storage system prototypes? It
turns out that a simple principle---{\em separating data from metadata}---
is the key to answering both questions.

To answer the first question, this proposal shows that an approach to storage
system design based on separating data from metadata can yield systems
that resolve this tension elegantly and effectively in a variety of settings.
Two observations motivate our approach: first, data items in storage systems are usually
large (4\,KB to several MB) while metadata is comparatively small (tens of bytes);
second, metadata, if carefully designed, can be used to validate data integrity.
These observations suggest an opportunity: by applying the expensive techniques
that guarantee robustness against a wide range of failures only to metadata,
which has little effect on scalability, it may be possible to protect data as well
at minimal cost. We show how to exploit this opportunity effectively in two very
different systems:

\begin{itemize}

\item \emph{Efficient and available storage replication.}
We have designed a replication protocol that combines the high availability
of asynchronous replication and the low cost of synchronous replication
for a small-scale storage system. To achieve that, we separate data from metadata
so that the system executes data accesses on
subsets of replicas while using fully replicated metadata to ensure that
requests are executed correctly and to speed up recovery of slow or failed
replicas.

\item \emph{A robust and scalable block store.}
We have built a system in the spirit of Amazon's popular Elastic Block
Store~\cite{amazonebs} but with unprecedented guarantees in terms
of consistency, availability, and durability in the face of
a wide range of server failures (including memory corruption, disk corruption, and firmware bugs). To
get there, we have applied the idea of ``separating data from metadata'' and developed new techniques
that match the scalability and performance of today's systems
while eliminating the single points of failure that these systems introduce to achieve their efficiency.
For example, our pipelined commit protocol transfers data in parallel to achieve scalability but
transfers metadata sequentially to provide ordering guarantees for write operations.
\end{itemize}
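To make the first design concrete, the following is a minimal sketch in Python (with hypothetical names and parameters; the actual protocol is considerably more involved) of how fully replicated metadata can validate data that is stored on only a subset of replicas:

```python
import hashlib
import random

# Sketch: data is written to only a subset (here 2 of 3) of replicas,
# while every replica stores the small metadata record (version and
# checksum) needed to validate a later read.

REPLICAS = 3
DATA_COPIES = 2

data = [dict() for _ in range(REPLICAS)]  # block_id -> bytes (partial)
meta = [dict() for _ in range(REPLICAS)]  # block_id -> (version, checksum) (full)

def write(block_id, payload, version):
    checksum = hashlib.sha256(payload).hexdigest()
    # Data path: a randomly chosen subset of replicas.
    for r in random.sample(range(REPLICAS), DATA_COPIES):
        data[r][block_id] = payload
    # Metadata path: all replicas, so any replica can validate a read.
    for r in range(REPLICAS):
        meta[r][block_id] = (version, checksum)

def read(block_id):
    # Any replica's metadata gives the latest version and checksum;
    # fetch the payload from whichever replica holds a matching copy.
    version, checksum = meta[0][block_id]
    for r in range(REPLICAS):
        payload = data[r].get(block_id)
        if payload is not None and hashlib.sha256(payload).hexdigest() == checksum:
            return version, payload
    raise IOError("no valid copy found")

write("b1", b"hello", version=1)
assert read("b1") == (1, b"hello")
```

The point of the sketch is the asymmetry: the cheap, partially replicated data path carries the bulk of the bytes, while the cheap-to-replicate metadata provides the guarantee that a stale or corrupted copy is never returned.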
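The pipelined-commit idea can likewise be illustrated with a small sketch (hypothetical class and method names; not the actual protocol): data blocks are transferred concurrently, while the tiny metadata records are admitted strictly in sequence-number order, which preserves write ordering.

```python
import hashlib
import threading
from concurrent.futures import ThreadPoolExecutor

class PipelinedCommit:
    """Sketch: parallel data transfer, sequential metadata commit."""

    def __init__(self):
        self.data_store = {}   # block_id -> bytes (parallel path)
        self.commit_log = []   # ordered metadata records (sequential path)
        self.next_seq = 0
        self.cv = threading.Condition()

    def transfer_data(self, block_id, payload):
        # Data path: may run concurrently for many blocks.
        self.data_store[block_id] = payload
        return hashlib.sha256(payload).hexdigest()

    def commit_metadata(self, seq, block_id, checksum):
        # Metadata path: a record is admitted only when its turn comes,
        # so a later write never commits before an earlier one.
        with self.cv:
            self.cv.wait_for(lambda: seq == self.next_seq)
            self.commit_log.append((seq, block_id, checksum))
            self.next_seq += 1
            self.cv.notify_all()

store = PipelinedCommit()
writes = [(i, f"blk{i}", bytes([i]) * 64) for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Transfer all data blocks in parallel...
    sums = {bid: pool.submit(store.transfer_data, bid, payload)
            for _, bid, payload in writes}
    # ...then commit metadata; each thread blocks until its sequence turn.
    for seq, bid, _ in writes:
        pool.submit(store.commit_metadata, seq, bid, sums[bid].result())

assert [rec[0] for rec in store.commit_log] == [0, 1, 2, 3]
```

Here the bulk data never waits on ordering, yet the commit log, being metadata-only, can afford to be strictly sequential.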

To validate the scalability claims of our storage system prototypes, we face a basic
problem shared by all researchers working on scalable storage: we do not have enough
resources to run our prototypes at full scale.
Today, the largest academic testbed we are aware of has about 1,000 machines
and 1\,PB of disk space, which is about two orders of magnitude smaller than the
state of the art for large-scale storage in industry, a gap that is likely to widen
in the future. To mitigate this gap, we are building an emulator for evaluating large-scale storage
systems on small-to-medium infrastructures. Once again, the key to our design is
separating data from metadata:



\begin{itemize}
\item \emph{An emulator for evaluating large-scale storage systems on small-to-medium infrastructures.}
Our design is based on the observation that the behavior of storage systems often does not
depend on the actual data being stored but rather only on the metadata, which allows us to
perform testing with artificial data. Therefore, by separating data from metadata and
performing highly efficient compression on carefully designed data, our emulator can run a
large number of unmodified nodes on a small number of machines and help reveal problems that
are not likely to occur in small-scale experiments.
The key challenge is that modern storage systems usually mix metadata with data, and
existing compression algorithms are either inaccurate or inefficient on such mixed patterns. To address
this challenge, we are designing a data pattern that is highly compressible and easily distinguishable
from metadata at multiple layers of the system.
\end{itemize}
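As a sketch of this idea (the format here is hypothetical; the real pattern must remain recognizable across multiple storage layers), artificial data blocks can carry a magic prefix that distinguishes them from metadata, followed by a trivially compressible body:

```python
import zlib

# Sketch: artificial data blocks carry (a) a magic prefix so lower layers
# can tell them apart from real metadata, and (b) a repeating filler body
# that compresses to a tiny fraction of the nominal block size.

MAGIC = b"EMUL"  # assumed marker; a real system would pick a rarer tag

def make_block(block_id: int, size: int = 4096) -> bytes:
    header = MAGIC + block_id.to_bytes(8, "big")
    filler = b"\x00" * (size - len(header))  # trivially compressible body
    return header + filler

def is_artificial(block: bytes) -> bool:
    return block.startswith(MAGIC)

block = make_block(42)
compressed = zlib.compress(block)

assert len(block) == 4096
assert is_artificial(block)
assert len(compressed) < 64  # the 4 KB block shrinks to a few dozen bytes
```

Because the emulator can regenerate any block from its identifier, only the compressed form ever needs to reside on the test machines, which is what lets a small cluster emulate a much larger deployment.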






