
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amscd}
\usepackage{amsfonts}
\usepackage{graphicx}%
\usepackage{fancyhdr}


\theoremstyle{plain} \numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}{Conjecture}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{finalremark}[theorem]{Final Remark}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{example}[theorem]{Example}
\newtheorem{question}{Question}

\textwidth6.5in
\textheight8.5in
\setlength{\topmargin}{0in} \addtolength{\topmargin}{-\headheight}
\addtolength{\topmargin}{-\headsep}

\setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0in}
\setlength{\parindent}{2em}



\pagestyle{fancy}\lhead{Research Statement} \rhead{December 2013}
\chead{{\large{\bf Yang Wang}}} \lfoot{} \rfoot{\bf \thepage} \cfoot{}

\newcounter{list}


\begin{document}


The primary directive of storage--not to lose data--is hard
to carry out: disks can fail in unpredictable ways, and so can CPUs and memories.
Concerns about robustness become even more pressing as scalable storage systems
like Google's GFS, Bigtable, Megastore, and Spanner, Facebook's Haystack, and
Amazon's DynamoDB become more complex.
In practice, Google observes 1 corruption for every 5.4 petabytes of data scanned in
Bigtable. Worse, the consequences of such corruptions are unbounded:
in the infamous 2008 Amazon outage, a single bit flip caused the Amazon
S3 service to go down for 8 hours.

Strong protection techniques, such as Byzantine Fault Tolerance (BFT), can protect
a system from unexpected errors, but they are usually expensive and thus hard to
apply to large-scale systems. My research explores new ways to achieve extremely
high levels of reliability in modern scalable storage systems at reasonable cost.


My approach to building robust and scalable storage systems is
based on two key observations: first, data in storage systems is usually
big (4~KB to several MB) while metadata is comparatively small (tens of bytes);
second, metadata, if carefully designed, can be used to validate data integrity.
These observations suggest an opportunity: by applying the expensive techniques
that guarantee robustness against a wide range of failures only to metadata,
it is possible to protect the data as well, at minimal cost and with little
effect on scalability. This allows me to use aggressive fault tolerance techniques,
including Paxos and end-to-end BFT, at almost no extra cost compared to more
traditional techniques like piecemeal checksums and synchronous primary-backup
replication. In fact, in some cases, providing strong end-to-end guarantees opens
up new optimization opportunities that allow our hardened systems to significantly
outperform the original systems on which they are based.
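The data/metadata split can be illustrated with a short sketch (a hypothetical Python illustration, not code from any of my systems; the names \texttt{make\_metadata} and \texttt{validate} are mine): a 4~KB data block is protected by a metadata record of well under a hundred bytes, and that tiny record alone suffices to detect corruption of the bulk data.

```python
import hashlib

BLOCK_SIZE = 4096  # a typical data block: 4 KB

def make_metadata(block_id, version, data):
    """Build the small record that a strongly replicated metadata
    service would store for one large data block."""
    return {
        "block_id": block_id,
        "version": version,
        "checksum": hashlib.sha256(data).hexdigest(),
    }

def validate(metadata, data):
    """On read, recompute the checksum and compare it against the
    trusted metadata copy; corrupted data is caught even if the
    data nodes themselves are unreliable."""
    return hashlib.sha256(data).hexdigest() == metadata["checksum"]

block = b"x" * BLOCK_SIZE
meta = make_metadata(7, 1, block)      # tens of bytes of metadata
assert validate(meta, block)           # intact data passes
corrupted = b"y" + block[1:]           # a single flipped byte
assert not validate(meta, corrupted)   # ...is detected
```

Because the metadata record is orders of magnitude smaller than the data it protects, replicating it with an expensive protocol adds negligible overhead.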

To achieve my goal of building robust and scalable storage systems, I have carried
out my research in three steps. First, I studied replication in a small-scale
storage system, since replication is the key component for achieving robustness in
distributed storage. My work demonstrates that storage replication can simultaneously
achieve the low cost of synchronous primary-backup replication and the high
availability of asynchronous Paxos-based replication. Second, I applied strong
protection techniques to a large-scale storage system while retaining its scalability
and efficiency. The key idea I developed in the first step still applies here,
though I also had to address challenges introduced by the scale of the system.
Finally, I set out to validate the scalability and efficiency claims of my previous
prototypes, which turned out to be challenging for a basic reason: it is hard to
find enough machines to run the prototypes at full scale. To solve this problem,
I built a library to emulate a large number of storage nodes on a few machines.
Once again, the key idea of my approach is to separate data from metadata.

In the following sections, I present these three steps in detail, followed by
my future work and other projects I have contributed to.

\paragraph{Efficient and available storage replication~\cite{Wang12Gnothi}.}
I started my research by studying replication,
because replication is the key technique for guaranteeing data
durability and availability, and it is one of the main factors that determine the
cost of the system. As a starting point, I carried out my research in a small-scale
storage system for its simplicity.

Different replication protocols provide different guarantees at different
costs: synchronous primary-backup replication uses $f+1$ replicas to tolerate $f$ crash
failures, but it usually has to use a conservative timeout to perform accurate failure
detection, which hurts the availability of the system; asynchronous replication protocols
like Paxos do not rely on accurate failure detection, but they increase the replication
cost to $2f+1$. My work targets the following question: can one write data to only $f+1$
nodes and still use short, potentially inaccurate timeouts without risking correctness?
This is well known to be impossible in the general case, but in storage
systems, the key idea of separating data from metadata allows me to closely approximate
this goal: by replicating metadata with Paxos and using the metadata to identify correct
data during failure and recovery, it is sufficient to replicate data on only $f+1$ nodes.
I built a small-scale storage system, Gnothi, based on this protocol, and the experiments
confirmed the expected benefits.
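The protocol can be sketched in a few lines (a toy Python illustration with $f=1$, not Gnothi's actual implementation; in the real system the metadata nodes run Paxos, and many failure cases this sketch omits are handled): metadata lives on $2f+1$ nodes, data on only $f+1$, and the metadata identifies which data copies are correct.

```python
import hashlib

F = 1
metadata_nodes = [{} for _ in range(2 * F + 1)]  # Paxos-replicated in reality
data_nodes = [{} for _ in range(F + 1)]          # bulk data: only f+1 copies

def write(block_id, version, data):
    # 1. Write the bulk data to the f+1 data nodes.
    for node in data_nodes:
        node[block_id] = data
    # 2. Commit a tiny metadata record (version + checksum) to 2f+1 nodes.
    record = (version, hashlib.sha256(data).hexdigest())
    for node in metadata_nodes:
        node[block_id] = record

def read(block_id):
    # A majority of metadata nodes always knows the latest version, so a
    # stale or corrupted data copy can be detected and skipped.
    version, checksum = metadata_nodes[0][block_id]
    for node in data_nodes:
        data = node.get(block_id)
        if data is not None and hashlib.sha256(data).hexdigest() == checksum:
            return data
    raise IOError("no correct copy available")

write(0, 1, b"hello")
data_nodes[0][0] = b"garbage"   # corrupt one of the two data copies
assert read(0) == b"hello"      # metadata still identifies the good copy
```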

A natural follow-up question is whether Paxos can be replaced with more robust
replication techniques, e.g., BFT, to achieve even stronger guarantees. This is part
of my ongoing research, and some of its findings have already been used successfully
in my second project, suggesting that this is a promising research direction.

\paragraph{A robust and scalable block store~\cite{Wang13Robustness}.}
My next step was to investigate how to build a large-scale storage system with strong guarantees.
The resulting prototype, called Salus, provides functionality similar to that of Amazon's popular Elastic Block
Store, but with unprecedented guarantees in terms
of consistency, availability, and durability in the face of
a wide range of server failures (including memory corruptions, disk corruptions, CPU errors, etc.).
Since many scalable systems have already been deployed in industry, my approach was to start from existing
mature scalable designs and investigate how to improve their robustness without hurting their scalability.

Existing scalable storage systems usually give up certain robustness properties for scalability, but my work
demonstrates that such trade-offs are not necessary. For example,
scalable systems shard data and write to different shards in parallel to achieve scalability. This
approach, however, does not provide ordering guarantees between writes, and such guarantees are essential to
the correctness of certain applications, e.g., a block store. I addressed this problem in Salus by separating data transfer from
metadata transfer: data is processed in parallel, while metadata, which carries information about
which data can be committed, is processed sequentially to provide ordering guarantees.
This approach simultaneously achieves scalability and ordering guarantees for write operations.
Furthermore, large-scale
storage systems are usually composed of multiple layers, with data replication performed at the lowest
layer. However, using approaches similar to the one used in Gnothi to enhance the robustness of the
replication layer is not enough, since the
middle layers are not replicated and can become single points of failure. My work shows that
replicating the middle layers in Salus improves not only the robustness of the system, but also its
efficiency when disk bandwidth exceeds network bandwidth.
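The write path can be sketched as follows (a toy Python illustration; the names and structure are mine, not Salus's actual implementation): payloads are transferred concurrently and may arrive in any order, while tiny commit records are applied strictly in sequence-number order, so writes become visible in order.

```python
import heapq
import random
import threading
import time

committed = []           # writes made visible, in commit order
pending = []             # commit records waiting for their turn
next_seq = 0             # next sequence number allowed to commit
lock = threading.Lock()

def commit(seq):
    """Apply commit records strictly in sequence-number order,
    buffering any record that arrives early."""
    global next_seq
    with lock:
        heapq.heappush(pending, seq)
        while pending and pending[0] == next_seq:
            committed.append(heapq.heappop(pending))
            next_seq += 1

def transfer_data(seq, payload):
    """Bulk data transfer happens in parallel; only the tiny commit
    record is serialized."""
    time.sleep(random.random() * 0.01)  # payloads arrive out of order
    commit(seq)

threads = [threading.Thread(target=transfer_data, args=(i, b"..."))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Despite out-of-order arrival, commits are in order.
assert committed == list(range(8))
```

The expensive, serialized step touches only a sequence number per write, so ordering costs almost nothing while the bulk transfers retain full parallelism.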

\paragraph{An emulator for evaluating large-scale storage systems on small-to-medium infrastructures~\cite{Wang14Exalt}.}

A basic tenet of sound systems research is to validate a design by implementing a prototype
and running experiments on it, but this turns out to be hard for a scalable storage system,
since doing so requires access to thousands of machines and tens of thousands of disks.
In the Salus project, I managed to evaluate my prototype on 200 machines, which
is useful but still not fully satisfactory, since my design targets thousands of machines.

To solve this problem, I have designed an emulator, Exalt, that uses data compression to run a large number
of storage nodes on 100 times fewer physical machines. To achieve efficient compression,
I leveraged the observation that the behavior of storage systems often does not
depend on the actual data being stored and developed Tardis, a synthetic data format
that allows applications to quickly separate data from metadata and achieve high rates of data compression.
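The idea can be sketched as follows (a toy Python illustration; the actual Tardis format and the 16-byte descriptor size are illustrative assumptions): each synthetic block begins with a small self-describing header, so an emulated storage node can discard the bulk bytes and keep only a fixed-size descriptor from which the block can be regenerated on demand.

```python
import struct

MAGIC = b"TRDS"  # hypothetical 4-byte format marker

def make_block(block_id, size):
    """Generate a synthetic data block: a 16-byte header identifying
    the block, followed by trivially reproducible filler bytes."""
    header = MAGIC + struct.pack(">IQ", block_id, size)  # 4 + 4 + 8 bytes
    filler = bytes((block_id + i) % 256 for i in range(size - len(header)))
    return header + filler

def compress(block):
    """An emulated node keeps only the header; the rest can be
    regenerated, so thousands of nodes fit on one machine."""
    assert block.startswith(MAGIC)
    return block[:16]

def decompress(descriptor):
    """Rebuild the full block from its 16-byte descriptor."""
    block_id, size = struct.unpack(">IQ", descriptor[4:])
    return make_block(block_id, size)

block = make_block(42, 4096)
desc = compress(block)                 # 4096 bytes -> 16 bytes
assert decompress(desc) == block       # lossless for synthetic data
```

Real metadata (which the system's behavior does depend on) is stored verbatim; only the synthetic payload is collapsed, which is what makes the emulation both cheap and faithful.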


By applying Exalt to existing large-scale storage systems, I was able to improve the
scalability of a mature storage system by an order of magnitude compared to its default
configuration, and I also found a number of problems that are not observable at small scale.
One of my goals for Exalt was to investigate the scalability of Salus. Indeed, my initial
experiments indicate that the current implementation of Salus may have some scalability
bottlenecks; I am currently working on fixing them.

\paragraph{Future work.} Most of my previous projects have raised interesting questions
that I hope to answer in the future:

First, as observed in the Gnothi project, the correctness of synchronous replication, which is
still the most popular replication protocol in practice, depends
on the accuracy of timeouts or, in other words, on the timely delivery of
messages. This assumption may be a security vulnerability rather than merely an availability
concern, because attackers can break it with denial-of-service (DoS) attacks
that temporarily block message transfers, and they can potentially profit from the subsequent errors.
The risk grows as more users move to shared cloud platforms like
Amazon's, since sharing machines and networks gives attackers
more opportunities. I will investigate whether such an attack is realistic
in a real setting and how severe its consequences can be.

Second, for performance reasons, many storage systems, including Salus, have chosen to
provide weaker consistency guarantees than those offered by the traditional linearizability
model. However, the application programmers who need to store data on such storage
systems are sometimes unaware of these weaker models, which risks the correctness of their applications.
To help address such problems, I will look for a systematic approach to checking
whether an application works correctly under weaker storage models.

Third, tools like Exalt are useful for checking the correctness and performance of an implementation.
Following Exalt, I want to take this one step further: can we automatically find the
performance bottleneck of a given system and explain its root cause? My internship project at
Facebook has already shed some light on this problem: it can identify which component of a
complicated system is the bottleneck, and I plan to push it further to identify the root cause.

\paragraph{Other projects.}
As a member of an excellent systems research group, I have had the opportunity to contribute to several
projects led by other graduate students. In particular, I have worked on the UpRight project~\cite{Clement09Upright}, a practical framework
for Byzantine Fault Tolerance, and the Eve project~\cite{Kapritsos12All}, which performs state machine replication for multithreaded
applications. Both projects provided useful insights for my research on large-scale storage systems.
Currently, I am working on making multiple replicated state machines communicate with each other.

\bibliographystyle{abbrv}
  \bibliography{LasrBibtex}

\end{document}
