\section{Emulator (future work)}

\subsection{Motivation}

Evaluating the design of a large-scale storage system is hard, for a simple reason:
we do not have enough resources to build a large enough testbed. The largest Emulab
deployment has 2PB of disk space, which is two orders of magnitude smaller than
current industrial deployments. Two other factors magnify this problem:
first, it is usually impossible to reserve all machines in Emulab, since they are shared
by many users; second, in academia we hope to go beyond the state of the art
and design systems for even larger scales. For example, we are designing a storage system
that is supposed to scale to exabytes. As a result, the resources available to us now
are about three orders of magnitude smaller than our target environment, which makes
our results unconvincing. Even for big companies, dedicating that many resources to testing
is economically infeasible.

For the state of the art, see the related work in the next subsection.

We hope to build a tool that can reveal problems unlikely to occur in small-scale
experiments. For this purpose, we are designing an emulator that can run a large number of
unmodified nodes on a small number of machines with much less storage space. The key observation
that makes this possible is that the behavior of a storage system is often (though not always)
insensitive to the actual data stored, so programmers often use all zeroes or highly regular
data when performing evaluations. This allows us to compress data at a very high ratio (1000:1
or more) and achieve our goal.

Of course, adding compression and decompression changes system behavior. Our emulator
can run in three modes to accurately test different aspects of the system. First, in emulation-only
mode, all processes run with compression. This mode can measure global resource
usage of the system, including memory, network, and disk
bandwidth usage, CPU cycles, etc., and reveal possible scalability
bottlenecks due to imbalanced resource usage. Second, in emulation-real hybrid mode, we run
most nodes with compression and a small number of nodes without. This allows us
to stress test the subsystems of a single node, including its OS, RAID, etc.
Finally, in emulation-simulation hybrid mode, we run all nodes with compression but
add a simulated latency to each network/disk operation so that we can estimate
the real throughput and latency of the system.

We plan to first apply this tool to existing large-scale storage systems such as HDFS
and HBase, as well as Salus. Based on the results, we may modify their designs to scale
to even larger deployments.

\subsection{Related work}

One common evaluation approach used in academia is to run the system on a small cluster and
use the results to predict its performance on a large cluster. For example, if the workload
is CPU-heavy and the CPU of a node is 10\% utilized, then the maximum
throughput is believed to be 10 times the measured throughput. This approach has several limitations:
first, some bugs are more likely to show up under high-stress tests~\cite{Mogul06Emergent}.
Second, some overheads grow with system scale, possibly non-linearly
(sorting is a typical example), which makes prediction hard. Furthermore, this
can shift the system bottleneck, since some overheads grow faster than others
as the system scales.
Third, system behavior may change as the number of nodes
increases: in HBase, a growing number of tables leads to more regions per storage
node and more frequent flushing and compaction. As a result, the number of I/O accesses per update
increases.

Another common approach used in testing is mocking/stubbing: to test a target component,
the programmer creates mocks/stubs for the other components with significantly simpler logic
and lower resource usage, and then runs the target component against these
mocks/stubs. This approach also has two limitations: first, it may miss
problems caused by the collaboration of different components; second, for a complex
system, writing a mock/stub for a component can be hard, since one component may need
to communicate with many other components in a complex pattern.

DieCast~\cite{Gupta08DieCast} uses time dilation~\cite{Gupta06Infinity} to run a large number of nodes on fewer
machines: it divides a period of time (one second in the paper)
into multiple slices and lets each process run in
one slice. This of course slows down the system, but DieCast predicts the actual throughput
by multiplying the measured throughput by the dilation factor. This approach has two
limitations: first, it does not address the shortage of disk space,
so it cannot be applied to storage system evaluation. Second, it significantly slows
down the system, and some large-scale experiments already take a long time.

David~\cite{Agrawal11Emulating} makes it possible to evaluate a local file system without enough
disk space. It also observes that storage evaluation is insensitive to data,
so it stores only the metadata of a local file system and completely discards the data; when a file
is read, it regenerates the data according to some rule. This approach works for local
file systems but cannot be applied to complex distributed storage systems: a distributed system
may store its own metadata as files on a local file system, and such metadata is critical
to the system's behavior and thus cannot be discarded. The fundamental problem is that metadata
at an upper level may be stored as data at a lower level, so discarding data at the lower level
is not acceptable. Instead, our approach compresses data losslessly.


\subsection{Design}

\subsubsection{Efficiently compressing data}

Though we assume user data is all zeroes or highly regular, the system itself often adds metadata,
which makes compression more complicated. For example, in HBase, if a user writes key-value pairs
of all zeroes, the system adds timestamps, checksums, region descriptors, etc., to the key-value
pairs, and our compressor should be able to capture such patterns.

We want our compressor to be 1) lossless: as stated before, some data is critical to
the system's correctness; 2) low-overhead at runtime: it should not become a CPU bottleneck; and 3) high
ratio: we hope for more than 1000:1, assuming the majority of the data is all zeroes.
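As a concrete illustration of these requirements, consider a minimal run-length encoder specialized for zero-dominated data. This is only a sketch under our all-zeroes assumption; the function names and encoding format are our own, not part of any existing system:

```python
def rle_compress(data: bytes) -> bytes:
    """Encode each run of zero bytes as a 0x00 marker followed by a
    4-byte big-endian run length; copy non-zero bytes verbatim."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0:
                j += 1
            out += b"\x00" + (j - i).to_bytes(4, "big")
            i = j
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def rle_decompress(blob: bytes) -> bytes:
    """Invert rle_compress: expand each (0x00, length) pair back
    into a run of zero bytes."""
    out = bytearray()
    i = 0
    while i < len(blob):
        if blob[i] == 0:
            run = int.from_bytes(blob[i + 1:i + 5], "big")
            out += b"\x00" * run
            i += 5
        else:
            out.append(blob[i])
            i += 1
    return bytes(out)
```

On an all-zero 1MB page this encoding is lossless and occupies 5 bytes, a ratio far beyond 1000:1, while costing only a linear scan, which illustrates that the three requirements are simultaneously achievable for the workloads we target.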

There is a large body of related work on compression: dictionary-based methods like Lempel-Ziv~\cite{Ziv77Universal,Ziv78Compression},
probabilistic methods like prediction by partial matching (PPM)~\cite{Cleary84Data,Moffat90Implementing},
and grammar-based methods like Sequitur~\cite{Manning97Identifying}.
All of them identify recurring patterns in the data and replace them with a more
compact form.

These algorithms usually trade off compression ratio against speed, while we want
both. To achieve that, we can run a small-scale system in real mode, collect traces,
perform the heavy work such as dictionary or grammar generation offline, and then apply
the resulting dictionary or grammar in the large-scale emulation. The basic assumption here is that the patterns are
the same in the small-scale run and the large-scale emulation.
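The offline-dictionary idea can be sketched with zlib's preset-dictionary support. The trace contents below are hypothetical stand-ins for metadata collected from a small-scale real run:

```python
import zlib

# Hypothetical trace collected from a small-scale real run; the
# recurring system metadata in it serves as a preset dictionary.
trace = b"region=usertable,ts=1680000000,cksum=0;" * 50
zdict = trace[-32768:]  # zlib preset dictionaries are at most 32 KiB

def compress_with_dict(data: bytes) -> bytes:
    """Compress using the dictionary built offline, so common
    metadata patterns cost almost nothing at emulation time."""
    c = zlib.compressobj(level=9, zdict=zdict)
    return c.compress(data) + c.flush()

def decompress_with_dict(blob: bytes) -> bytes:
    """Decompression must use the same preset dictionary."""
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(blob) + d.flush()
```

The expensive step (choosing the dictionary) happens once, offline; the emulation-time cost is a single deflate pass per record.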


\subsubsection{Transparently compressing data}
File: the challenging part is supporting random writes on compressed data. We plan to use the LFS
approach~\cite{rosenblum92lfs}, turning a random write into an append. Of course this makes reads more complicated. Garbage
collection may not be necessary, since we assume a very high compression ratio anyway.
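The log-structured idea can be sketched as follows. The block size and class names are our own assumptions, and the per-block compression step is elided:

```python
BLOCK = 4096  # assumed block size

class LogStructuredFile:
    """Sketch: a random write at a block offset becomes an append to
    a log; an in-memory index maps each block to its latest position."""

    def __init__(self):
        self.log = []     # append-only list of (compressed) blocks
        self.index = {}   # block number -> position in self.log

    def write_block(self, blockno: int, data: bytes):
        # Random write -> append; old versions stay in the log.
        self.index[blockno] = len(self.log)
        self.log.append(data)  # compression would happen here

    def read_block(self, blockno: int) -> bytes:
        # Reads must go through the index to find the latest version.
        pos = self.index.get(blockno)
        return self.log[pos] if pos is not None else b"\x00" * BLOCK
```

The extra index lookup is exactly the added read complexity noted above; because stale log entries are already highly compressed, reclaiming them may not be worth the effort.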

Network: compression should be easy. Another feature we want is multiplexing, since we need to run
multiple processes on a single physical machine and these processes may want to bind to the
same port. We may need to provide a pseudo IP address to each process so that they do not
conflict.
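One possible sketch of this multiplexing layer maps (pseudo IP, port) pairs onto distinct local ports. The address scheme and base port below are arbitrary assumptions:

```python
class PortMux:
    """Sketch: each emulated node gets a pseudo IP; binds to the same
    well-known port on different nodes map to distinct local ports."""

    def __init__(self, base_port: int = 20000):
        self.base_port = base_port
        self.mapping = {}  # (pseudo_ip, port) -> local port

    def bind(self, pseudo_ip: str, port: int) -> int:
        key = (pseudo_ip, port)
        if key not in self.mapping:
            # Allocate local ports sequentially; a real implementation
            # would also track releases and collisions.
            self.mapping[key] = self.base_port + len(self.mapping)
        return self.mapping[key]
```

A socket-interposition shim would consult this table on every bind and connect, so unmodified processes still believe they own the well-known port.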

Memory: the challenge is finding the appropriate interface at which to compress data. Our plan is to add two functions,
compress and decompress, to the target objects: before an object is accessed, call decompress; after it is
updated, call compress. We may need automatic instrumentation here. We can analyze the memory
image from a small-scale real experiment to find the objects that need to be compressed and use
this information in the large-scale emulator run.
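A minimal sketch of this compress/decompress interface for a single in-memory field follows. The class is illustrative only; in practice these hooks would be injected by automatic instrumentation rather than written by hand:

```python
import zlib

class CompressedField:
    """Sketch: the payload lives compressed in memory; accessors
    decompress on read and recompress after update."""

    def __init__(self, data: bytes):
        self._blob = zlib.compress(data)    # "compress" hook after init

    def get(self) -> bytes:
        return zlib.decompress(self._blob)  # "decompress" hook on access

    def set(self, data: bytes):
        self._blob = zlib.compress(data)    # "compress" hook after update
```

For zero-dominated payloads the resident size shrinks by orders of magnitude, at the cost of a decompress on every access, which is why only the large objects identified from the small-scale memory image should be instrumented.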

\subsection{Usage}

With compression and decompression, system behavior in an emulator run may differ from
that of a real run. Here we present three modes in which we can test for different classes of problems.

\subsubsection{Pure emulation mode to find scalability bottlenecks}
In this mode, we run all nodes with emulation. This mode can reveal potential scalability bottlenecks.

Ideally, in a perfectly scalable system, when the number of nodes increases, the resource usage on each node should
not increase. In practical systems like HDFS, the design goal is that the resource usage on each DataNode does not
increase and the resource usage on the NameNode should increase linearly with the number of DataNodes.

The emulator can be used to verify whether the implementation actually satisfies the design goal: we record
the resource usage (CPU cycles, memory consumption, disk and network bytes, etc.) on each node; then we
run the system with 100 nodes, 200 nodes, and so on up to the target scale, and look for unexpected trends.
If some node experiences an unexpected increase in resource utilization, that indicates a possible scalability bottleneck.
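This check can be sketched as a simple trend test over per-node measurements. The tolerance threshold below is a placeholder, and nodes whose usage is expected to grow (such as the NameNode) would in practice be judged against their own design goal:

```python
def flag_bottlenecks(usage_by_scale, tolerance=1.2):
    """usage_by_scale: {scale: {node: usage}}. Return the nodes whose
    usage grew by more than `tolerance`x between the smallest and
    largest emulated scale -- candidate scalability bottlenecks."""
    scales = sorted(usage_by_scale)
    first = usage_by_scale[scales[0]]
    last = usage_by_scale[scales[-1]]
    return [node for node in first
            if node in last and last[node] > tolerance * first[node]]
```

Running this over CPU, memory, disk, and network counters separately localizes not only which node but also which resource fails to scale.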

\subsubsection{Emulation-real hybrid mode to stress test a single node}
In this mode, we run most nodes with emulation and several nodes in a real setting. In this way, we
can stress test the OS, JVM, network, etc., of a single machine. This mode is similar to the
traditional ``stub'' approach, but it eliminates the need to write a stub, which can be quite
complex in a distributed system. If we know that some metadata nodes, such as the NameNode or ZooKeeper,
are potential bottlenecks, we should also run those nodes in this mode for an accurate measurement.

\subsubsection{Emulation-simulation hybrid mode to estimate real throughput and latency}
To estimate the real throughput and latency, we need to model the disk/network latency
as in previous work.
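Such latency modeling can be sketched with a per-node virtual clock that advances by a modeled delay instead of the (compressed, hence unrealistically fast) measured delay. The seek time and bandwidth figures below are placeholder parameters, not calibrated models:

```python
class VirtualClock:
    """Sketch: accumulate simulated time for a node's disk and
    network operations instead of measuring wall-clock time."""

    def __init__(self):
        self.now = 0.0  # simulated seconds

    def disk_op(self, nbytes, seek_ms=8.0, mb_per_s=100.0):
        # Placeholder model: fixed seek plus transfer time.
        self.now += seek_ms / 1000 + nbytes / (mb_per_s * 1e6)

    def net_op(self, nbytes, rtt_ms=0.5, mb_per_s=125.0):
        # Placeholder model: fixed round trip plus transfer time.
        self.now += rtt_ms / 1000 + nbytes / (mb_per_s * 1e6)
```

Reported throughput and latency would then be computed from these virtual clocks rather than from the emulator's wall-clock time.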

In order to make this work, there are several requirements: 1) CPU should not be the bottleneck. This is
a risk when we run a large number of processes on the same machine. One solution is to apply time dilation
as in DieCast; another is to run the emulation on large computation clusters that have enough
cores but not enough disks. 2) Emulated disk accesses should not take longer than in the real system. This is
again a risk when we run multiple processes on the same machine. One solution is to use a
RAM disk.

\subsection{Evaluation}

The questions we want to answer, and how we validate our answers:

\begin{itemize}

\item Does the system work at the expected scale? Are there any bugs or configuration problems
that only happen at large scale? Example: HDFS timeouts. Method: use all three modes to run the
system. Validation: analyze the causes to see whether they make sense.

\item Is the resource usage as expected? Example: for HDFS, when the size of the cluster
grows from 100 to 1,000 nodes, the CPU/network/memory/disk usage per DataNode should not increase,
and the usage on the NameNode should increase tenfold. Is this true? Method: run
pure emulation mode. Validation: if true, good; if not, analyze the reason to see whether it makes sense.

\item Does a single node have performance problems under stress? Example: an HDFS DataNode seems
to experience performance degradation when it holds a large number of files. Method: run
emulation+real mode. Validation: if no problem, good; if there are problems, analyze the reason
to see whether it makes sense.

\item What is the maximum throughput and the corresponding latency? Method: run the emulation+simulation mode.
Validation: run the maximum real experiment and compare.

\item Where is the performance bottleneck? Example: for HDFS, probably the NameNode throughput.
Method: run the emulation+simulation mode. Some node should be saturated in CPU/network/disks.
Validation: we can run the detected node in real mode for verification.

\item What is the maximum size of the system? Example: for HDFS, probably determined by NameNode memory. 
Method: run pure emulation mode. Validation: we can run the bottleneck node in real mode
for verification.

\item How does recovery work? How long does it take? How does it affect latency/throughput?
We can tune the failure rate. Method: run emulation+simulation mode. Validation: ?

\item How do new media like SSDs change the picture? Method: run emulation+simulation mode
with a changed simulation model. Validation: get some SSDs and run those nodes in real mode?

\end{itemize}


