\section{Limitations and applicability}
\label{sec-limitations}

\sys relies on a number of assumptions to achieve a high degree of node
colocation.  This section reviews these assumptions and discusses
which of them can be weakened to widen the applicability of our approach.

\sys is primarily designed to evaluate I/O-intensive applications like
distributed file systems storing large
files~\cite{ghemawat03google,Shvachko10HDFS} or key-value stores with
relatively large values~\cite{calder11windows,Nightingale12FDS}. Applications
that are not I/O-intensive, or that store small values, cannot benefit
significantly from \scheme, as they gain little from compressing data.
In Section~\ref{sec-evalhbase} we explore
in more detail how the size of the value in a key-value store
affects the colocation ratio of \sys.


Our current implementation of \sys makes two additional assumptions: first,
that the target application does not modify the data written by the
clients, although it may split the data and insert metadata; and
second, that experiments are not sensitive to the contents of the
data, so that benchmarks can operate on synthetic data.

While these assumptions hold for the systems we have so far applied
\sys to, they are not fundamentally required for \sys to be
applicable.  We consider below some popular techniques that violate
these assumptions and discuss how our implementation of \sys can be
modified to work in conjunction with them.


\smallskip{\bf Encryption and erasure coding} Both techniques involve
encoding data into a different format, violating the assumption that
client data is immutable. To handle these cases, \sys would compress
the data using \scheme\ {\em before} encoding it, and then add {\em
  filler bytes} as necessary to match the length of the (encoded)
original data. Filler bytes would use the same format as \scheme
(making them highly compressible), but with a different flag sequence
(so that they can be distinguished from real data).  When reading the
data, \sys would remove the filler bytes before performing decryption
and then decompress the \scheme sequence to obtain the client data.
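To make the filler-byte idea concrete, the following minimal Python sketch
illustrates the write and read paths. It is an illustration under stated
assumptions, not \sys's implementation: \texttt{zlib} stands in for the
\scheme compressor, a single-byte XOR stands in for real encryption, and a
4-byte length header stands in for the flag sequence that distinguishes real
data from filler.

```python
import struct
import zlib

KEY = 0x5A  # toy single-byte XOR "cipher"; a stand-in for real encryption


def xor_encrypt(data: bytes) -> bytes:
    # XOR is its own inverse, so this function also decrypts.
    return bytes(b ^ KEY for b in data)


def write_side(payload: bytes) -> bytes:
    """Compress first, encrypt the compressed bytes, then append zero
    filler (highly compressible) so the stored record is as long as the
    encoded, uncompressed payload would have been.  With the toy XOR
    cipher, that target length equals len(payload)."""
    body = xor_encrypt(zlib.compress(payload))
    header = struct.pack(">I", len(body))  # stand-in for the filler flag sequence
    filler = bytes(len(payload) - len(header) - len(body))
    return header + body + filler


def read_side(stored: bytes) -> bytes:
    """Strip the filler first, then decrypt, then decompress."""
    (body_len,) = struct.unpack(">I", stored[:4])
    return zlib.decompress(xor_encrypt(stored[4:4 + body_len]))
```

Note that the read path discards the filler before decryption, matching the
order described above: only the compressed body is ever decrypted.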



\smallskip{\bf Deduplication} Deduplication compares the contents of
different data units (files, chunks, etc.)  to eliminate duplicates
and, by making execution dependent on the actual data, violates our
second assumption. Indeed, deduplication schemes that directly compare
the units' data are incompatible with \sys. However, \sys can still be
applied to deduplication approaches that only compare hashes of data
units. \sys would first compute the hash of the client data and then
replace the client data with data formatted using an extended version
of \scheme, which inserts the hash of the data unit between the flag
and the marker. The deduplication module could then use this hash
directly to identify duplicate data units.
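As an illustration of the hash-embedding idea, the sketch below replaces
client data with a fixed-layout synthetic record carrying a SHA-256 digest of
the original. The record layout (flag byte, digest, marker byte, length) is
hypothetical; it merely mimics the extended \scheme format described above.

```python
import hashlib
import struct

FLAG = b"\xF0"    # hypothetical \scheme flag byte
MARKER = b"\x0F"  # hypothetical \scheme marker byte


def make_dedup_record(client_data: bytes) -> bytes:
    """Replace client data with a synthetic record that embeds the hash
    of the original between the flag and the marker."""
    digest = hashlib.sha256(client_data).digest()
    return FLAG + digest + MARKER + struct.pack(">I", len(client_data))


def dedup_key(record: bytes) -> bytes:
    """The deduplication module reads the embedded hash directly,
    without ever seeing the original client bytes."""
    return record[1:1 + hashlib.sha256().digest_size]
```

Two records built from identical client data carry identical keys, so a
hash-comparing deduplicator behaves exactly as it would on real data.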

\smallskip{\bf Compression} If the system being tested already uses
compression, it is in general not possible to use synthetic data at
the clients, since the compression ratio depends on the actual
data. If, however, compression is performed only at the client side,
\sys could apply a technique similar to the one used to handle
encryption and erasure coding: the client would first compress the
real data to determine its compressed size, then create synthetic
(\scheme) data, compress it using \scheme, and finally
append the right amount of filler bytes to match the length of the
(compressed) real data.
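A minimal sketch of this write path, with \texttt{zlib} again standing in for
both the client's compressor and \scheme's (the real formats differ, and the
function names are ours):

```python
import zlib


def compressed_size(data: bytes) -> int:
    # Size the real system would store after client-side compression.
    return len(zlib.compress(data))


def synthetic_compressed(real_data: bytes) -> bytes:
    """Compress the real data only to learn its compressed size, then
    emit compressed synthetic data padded with zero filler to exactly
    that size."""
    target_len = compressed_size(real_data)
    synthetic = bytes(len(real_data))  # highly compressible stand-in
    body = zlib.compress(synthetic)
    # Zero bytes compress at least as well as any real payload here.
    assert len(body) <= target_len
    return body + bytes(target_len - len(body))
```

The stored record thus has the same length as the compressed real data, while
its contents remain synthetic and highly compressible by \scheme.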


\smallskip{\bf Data-sensitive applications} Many applications use
SQL-like languages for their queries.  The execution of these
queries depends on the data, since SQL predicates are evaluated
against the data values. The rest of the data, which does not affect
the processing of the queries, can remain synthetic. The efficiency of
\sys in these cases depends on the ratio of sensitive to non-sensitive
data.
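The split between sensitive and non-sensitive data can be sketched as follows
(the row layout and function names are hypothetical): keep the columns that
predicates evaluate real, and synthesize only the bulk payload.

```python
def make_row(predicate_fields: dict, payload_len: int) -> dict:
    """Keep the columns that SQL predicates read real; fill the bulk
    payload with synthetic, highly compressible bytes."""
    row = dict(predicate_fields)
    row["payload"] = bytes(payload_len)  # synthetic stand-in
    return row


def run_query(rows, predicate):
    """Predicates touch only the real columns, so query results match
    those computed over the original data."""
    return [row for row in rows if predicate(row)]
```

The smaller the real (predicate-relevant) fraction of each row, the more of
the data \sys can synthesize and compress.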

