\subsection{HBase}
\label{sec-evalhbase}
HBase~\cite{hbase} is a distributed key-value store built upon HDFS.
The basic data unit of HBase is a \emph{region}, which corresponds to
a contiguous key range in a table.  An HBase cluster includes a
Master, which is responsible for assigning regions to region
servers. Client requests to a specific region are directed to the
corresponding region server. The region server processes write
requests by logging them to HDFS while also keeping them in a memory
buffer called the \emph{memcache}. When the size of a memcache exceeds
a threshold, the region server writes the entire memcache into a
checkpoint file on HDFS, so that it can garbage-collect the
corresponding log files. A checkpoint is also taken if the total
memory usage across all regions exceeds some limit; in this case, the
region server checkpoints the region with the largest memcache.  To
free up space, a region server periodically performs
\emph{compaction}, merging several checkpoints into one.  In essence,
a region server transforms the random access patterns of a key-value
store into the append-only interface of HDFS. When a region grows
large, HBase splits it into two for load balancing; conversely, if two
adjacent regions are too small, they are merged into one. Apart from
the Master and the region servers, an HBase cluster incorporates a
ZooKeeper ensemble that performs lease management.
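The write path described above can be sketched in a few lines. The
threshold values, names, and single-process data structures below are
illustrative simplifications, not HBase's actual implementation:

```python
# Minimal sketch of the region-server write path: log first, buffer in
# the memcache, flush to a checkpoint when a per-region threshold or the
# global memory limit is exceeded. Both limits are hypothetical values.

FLUSH_THRESHOLD = 64 * 2**20   # per-region memcache limit (illustrative)
GLOBAL_LIMIT = 16 * 2**30      # total memory limit across regions (illustrative)

class RegionServer:
    def __init__(self):
        self.memcache = {}     # region -> {key: value}
        self.sizes = {}        # region -> bytes currently buffered
        self.log = []          # stands in for the write-ahead log on HDFS

    def put(self, region, key, value):
        self.log.append((region, key, value))              # 1. log to HDFS
        self.memcache.setdefault(region, {})[key] = value  # 2. buffer in memory
        self.sizes[region] = self.sizes.get(region, 0) + len(key) + len(value)
        if self.sizes[region] > FLUSH_THRESHOLD:           # per-region threshold
            self.flush(region)
        elif sum(self.sizes.values()) > GLOBAL_LIMIT:      # global memory pressure:
            self.flush(max(self.sizes, key=self.sizes.get))  # flush largest memcache

    def flush(self, region):
        # Write the whole memcache as a checkpoint file on HDFS; the
        # region's older log entries then become garbage-collectable.
        checkpoint = dict(self.memcache.pop(region, {}))
        self.sizes[region] = 0
        self.log = [entry for entry in self.log if entry[0] != region]
        return checkpoint
```

Flushing the region with the largest memcache under global pressure
frees the most memory per checkpoint file created, which matters
because each new file imposes metadata work on the HDFS NameNode.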

We evaluate HBase using a simple workload that can achieve a high
throughput: we create enough regions so that each region server stores
about 10 regions, and we start multiple clients that randomly write
key-value pairs to those regions. The key size is 4 bytes and the
value size is 1\,MB. To measure the maximum achievable throughput, we
disable split, merge, and compaction operations: to ensure that split
and merge operations do not occur, we limit the number of key-value
pairs written to each region. We plan to study the effects of split,
merge, and compaction in the future.
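The benchmark client can be sketched as follows. The key and value
sizes come from the description above; the per-region write cap is a
hypothetical stand-in for the limit we used to suppress splits and
merges:

```python
# Sketch of the benchmark client: random writes of 4-byte keys and
# 1 MB values, spread uniformly over a fixed set of regions, with a
# per-region cap so that no region grows enough to trigger a split.
import os
import random

VALUE_SIZE = 1 * 2**20       # 1 MB values, as in the experiment
MAX_PAIRS_PER_REGION = 100   # hypothetical cap; the real limit is not stated here

def generate_writes(num_regions, total_writes):
    written = [0] * num_regions
    for _ in range(total_writes):
        region = random.randrange(num_regions)   # pick a region uniformly
        if written[region] >= MAX_PAIRS_PER_REGION:
            continue                             # respect the per-region limit
        key = os.urandom(4)                      # 4-byte random key
        value = os.urandom(VALUE_SIZE)           # 1 MB value
        written[region] += 1
        yield region, key, value
```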

Our experiments keep the HBase Master, the HDFS NameNode, and the
ZooKeeper ensemble real, while all DataNodes and region servers are
emulated. In each experiment we assign 500\,MB of physical memory to
each region server; however, we perform in-memory compression, which
effectively increases each region server's memory to 16\,GB.


\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hbase_throughput.pdf}}
\caption{\label{graph:hbase_throughput} HBase throughput scalability.}
\end{figure}

Figure~\ref{graph:hbase_throughput} shows the throughput scalability
of HBase as the number of available region servers increases.  Note
that the raw throughput of HBase is much lower than that of HDFS (see
Figure~\ref{graph:hdfs_throughput_scalability}), for two reasons.
First, HBase writes data twice to HDFS: once for logging and once for
checkpointing. Second, region servers are more CPU-intensive than
DataNodes and therefore cannot benefit as much from colocating
multiple nodes on the same machine.


HBase achieves a maximum write throughput of about 80\,GB/s. Since
HBase writes data twice, this translates to a 160\,GB/s throughput at
the HDFS layer, which is about 40\% of the maximum throughput
achievable by HDFS. Our profiling shows that the {\tt sync} calls to
disk at the HDFS NameNode are still the bottleneck of the system.  The
reason for this 60\% performance loss is that region servers perform
many directory operations beyond simply creating and closing files.
For example, when a log file is garbage-collected, the region server
first moves it to an ``old log'' directory as a backup and only
deletes it after some time has elapsed.
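The arithmetic above can be checked directly. Note that the HDFS-layer
maximum used here is inferred from the ``about 40\%'' statement rather
than measured:

```python
# Sanity-checking the throughput numbers. The implied HDFS maximum is
# derived from the "about 40%" figure; it is not an independent measurement.
hbase_throughput_gbps = 80                        # measured application-level writes
hdfs_layer_gbps = hbase_throughput_gbps * 2       # every byte written twice: log + checkpoint
implied_hdfs_max_gbps = hdfs_layer_gbps / 0.4     # "about 40% of the maximum"
loss = 1 - hdfs_layer_gbps / implied_hdfs_max_gbps  # the 60% performance loss
```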


\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hbase_region_size.pdf}}
\caption{\label{graph:hbase_moreregions} HBase aggregate throughput as the number of regions
per GB of memory changes.}
\end{figure}

In Figure~\ref{graph:hbase_throughput}, each region server has 16\,GB
of memory and holds 10 regions; since the default maximum size of a
region is 200\,MB, all data can be cached in memory. Our next
experiment evaluates how the performance of HBase is affected when we
decrease the amount of memory per region. As shown in
Figure~\ref{graph:hbase_moreregions}, HBase throughput drops
significantly when the number of regions per GB of memory exceeds 7,
which translates to about 150\,MB of memory per region. In other
words, for HBase to work efficiently in a large-scale deployment, each
region server must be equipped with a considerable amount of memory:
enough to hold at least $\frac{3}{4}$ of its on-disk data.  The reason
for this performance drop is that region servers flush their regions
to HDFS files when their memory usage exceeds a certain threshold. If
the number of regions per GB of memory is high, these flushes create a
large number of small files on HDFS, which stresses the HDFS
NameNode. Resolving this problem requires a significant redesign of
HBase, which is beyond the scope of this dissertation. Note that this
performance drop is only observed at large scales, since small
deployments cannot generate enough load to saturate the HDFS NameNode.
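The memory-per-region figures quoted above follow from a quick
calculation (taking 1\,GB as 1024\,MB and the 200\,MB default maximum
region size mentioned earlier):

```python
# Working through the memory-per-region numbers: 7 regions per GB of
# memory is roughly 146 MB per region ("about 150 MB"), which is close
# to 3/4 of the 200 MB default maximum region size.
regions_per_gb = 7                      # threshold where throughput starts to drop
mb_per_region = 1024 / regions_per_gb   # memory available per region at that point
max_region_mb = 200                     # default maximum region size
fraction_cached = mb_per_region / max_region_mb  # memory relative to on-disk data
```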



\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/key_value_size.pdf}}
\caption{\label{graph:key_value_size} Colocation ratio of \sys.}
\end{figure}

Our last experiment explores the effect of writing small values on the
colocation ratio achievable in \sys
(Figure~\ref{graph:key_value_size}). Not surprisingly, \sys achieves
high colocation ratios when values are large (around 500\,KB), but
does not fare as well for small values. Note that the achievable
colocation ratio for a given workload is bounded: eventually CPU
utilization becomes the bottleneck. For HBase, this happens at a
colocation ratio of about 110.


