\subsection{Case study: HDFS}

HDFS~\cite{Shvachko10HDFS} is an open source implementation of the
Google File System (GFS)~\cite{ghemawat03google}.  Each HDFS cluster
contains a single NameNode that stores the file system namespace
information and several DataNodes that store the file contents.  Each
file is split into multiple blocks and each block is stored on three
DataNodes. When a client creates a file or adds a block to an existing
file, it first contacts the NameNode, which responds with a list of
the DataNodes that will store the new block. The client can then
directly write the block contents to these DataNodes.
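The create/write path described above can be sketched as follows. This is our own illustrative model, not HDFS's actual API; all class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the HDFS write path; names are ours, not HDFS's API.
class NameNodeSketch {
    // Namespace: file path -> ordered list of block IDs.
    private final Map<String, List<Long>> namespace = new HashMap<>();
    private long nextBlockId = 0;

    // Step 1: the client asks the NameNode for a new block; the NameNode
    // records it in the namespace and picks three DataNodes for the replicas.
    List<String> addBlock(String path) {
        long id = nextBlockId++;
        namespace.computeIfAbsent(path, p -> new ArrayList<>()).add(id);
        return Arrays.asList("datanode-1", "datanode-2", "datanode-3");
    }

    int blockCount(String path) {
        return namespace.getOrDefault(path, new ArrayList<>()).size();
    }
}

class ClientSketch {
    // Step 2: the client streams the block contents directly to each
    // DataNode returned by the NameNode (network I/O elided here).
    static List<String> writeBlock(NameNodeSketch nn, String path, byte[] data) {
        List<String> replicas = nn.addBlock(path);
        for (String dn : replicas) {
            // send(dn, data);  -- elided in this sketch
        }
        return replicas;
    }
}
```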


We mainly focus on write workloads since they are more likely to cause
scalability problems. Unless otherwise specified, in our experiments
each client creates a file in its own directory, writes 192\,MB of data
to it (as suggested by the HDFS developers in their white paper on how
to test HDFS' scalability~\cite{HDFSScalability}),
closes the file, and then starts a new file. This workload achieves
the highest scalability among all workloads that we tried; Section
\ref{performance-degradation} describes the performance problems
caused by other workloads.  We use a block size of 128\,MB and the
default three-way replication (again, as suggested
in~\cite{HDFSScalability}). Unless otherwise specified, we run
DataNodes and clients in emulated mode while the NameNode runs in real
mode.



For the above workload, \scheme achieves a compression ratio of over 500,
but in practice the degree of colocation is limited by CPU utilization:
we colocate 100 DataNodes on one machine and achieve
an effective write bandwidth of 10\,GB/sec on a disk with 100\,MB/sec
physical bandwidth. For experiments with modest storage capacity
requirements, we can increase the write bandwidth to 20\,GB/sec by
writing to tmpfs, an in-memory file system. Our largest
experiment uses 192 server machines to emulate an HDFS cluster with
19,200 DataNodes.
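In concrete terms, the arithmetic behind these figures is as follows (each colocated DataNode is credited the full physical disk bandwidth, with \scheme's compression absorbing the difference):
\[
100\ \text{DataNodes/machine} \times 100\,\text{MB/s} = 10\,\text{GB/s},
\qquad
192\ \text{machines} \times 100 = 19{,}200\ \text{DataNodes}.
\]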


\subsubsection{HDFS throughput scalability}

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hdfs_throughput.pdf}}
\caption{\label{graph:hdfs_throughput_scalability} HDFS throughput scalability.}
\end{figure}


In some sense, the result of our experiments to test the scalability
of HDFS is not surprising: the bottleneck of the system is the centralized
NameNode. What is perhaps surprising is that, thanks to \sys, we were able
to increase the system throughput by an order of magnitude without
changing the architecture of the system.





Figure~\ref{graph:hdfs_throughput_scalability} reports the results of
our experiments.  On the x-axis we increase the number of
DataNodes and on the y-axis we plot the aggregate throughput of the
system, as observed by the clients. The vertical arrows represent the
process of fixing an issue that was limiting the system throughput.
When an issue is fixed, we rerun the experiment for the same number of
DataNodes, to verify that the system indeed achieves a higher
throughput. For reference, we also plot a straight line that shows the
ideal throughput achievable by a perfectly scalable system that
leverages the full bandwidth of all disks (100\,MB/s).


Our first experiment shows that the original HDFS system quickly
saturates at around 37\,GB/s. We discovered through profiling that the
default number of RPC threads at the NameNode was limiting the
achievable throughput; increasing the number of RPC threads from 10 to
256 allows the NameNode to achieve much higher throughput.
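In stock HDFS, this thread count is controlled by the {\tt dfs.namenode.handler.count} property in {\tt hdfs-site.xml}; the fragment below shows the value we used (deployments should size it to their own workload):

```xml
<!-- hdfs-site.xml: raise the NameNode RPC handler count from its default of 10 -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>256</value>
</property>
```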


After fixing the first issue, the system saturates at around
286\,GB/s. Further profiling showed that the I/O accesses at the
NameNode had become the system bottleneck. More specifically, the
NameNode debug information was being stored on the same disk as its
log file; this prevented either file from being accessed sequentially,
introducing a large number of seeks that slowed down logging.
Our solution was to write the debug information
to tmpfs instead, thereby ensuring that the NameNode log file could
access the disk at full speed. Alternatively, one could store the
debug information on a separate disk, if one were available.


Applying the second fix increases the system throughput to
418\,GB/s, at which point the system again becomes saturated.  This
finding is consistent with the scalability assessment of the HDFS
developers that with each client having a write throughput of
40\,MB/s, ``10,000 writers can produce enough workload to saturate the name-node''~\cite{HDFSScalability}, which
corresponds to an aggregate throughput of 400\,GB/s. While this
assessment was obtained using extrapolation, we consider it reasonably
accurate since it is based on a large deployment of 4,000 nodes.

Since we suspected disk I/O to be the system bottleneck at this point,
we performed a final experiment in which disk {\tt sync} is disabled
and the NameNode writes all logs to tmpfs.  The purpose of this
experiment is to project the scalability of the system in the presence
of a fast storage medium (e.g.~NVRAM, SSD). In this configuration, the
system throughput increases by a further 42\%, to a maximum throughput
of 595\,GB/s.


Of course, we do not claim that \sys's throughput predictions are
perfectly accurate; on the contrary, we acknowledge the limitations of
running a system whose resources are partially emulated.  Nonetheless,
the benefits of \sys are clear: it allowed us to test the system's real
code and identify and resolve performance issues at a scale that would
have otherwise remained the sole province of a few large companies.


\subsubsection{HDFS capacity}

%\begin{figure}[t]
%\centerline{\includegraphics[angle=0, width=0.5\textwidth]{graphs/hdfs_space_scalability.pdf}}
%\caption{\label{graph:hdfs_space_scalability} HDFS space scalability.}
%\end{figure}

\begin{table}[t]
\begin{footnotesize}
\begin{center}
  \begin{tabular}{| c | c | c | c | c|}\hline
    Memory size      &  1\,GB & 2\,GB & 4\,GB & 8\,GB \\ \hline
   HDFS capacity    & 1.15\,PB     & 2.35\,PB & 4.76\,PB & 9.49\,PB    \\ \hline
  \end{tabular}
\end{center}
\caption{\label{graph:hdfs_space_scalability} HDFS space scalability as a function of NameNode memory size.}
\end{footnotesize}
\end{table}



The capacity of an HDFS cluster is limited by the amount of memory
available to the NameNode.  In this experiment, we measure how
much memory the NameNode needs per 1\,PB of HDFS storage space.
Measuring the actual memory usage of a Java process is difficult,
since released memory may not be garbage collected immediately. We
therefore use an approximation: we set a maximum heap size for the
NameNode process and keep writing data to HDFS until we observe
significant garbage-collection overhead at the NameNode (throughput
drops below 30\% of its normal value), an indication that its memory
is close to full.
Table~\ref{graph:hdfs_space_scalability} shows that the capacity of HDFS grows linearly with
the amount of memory at the NameNode. In particular, 1\,GB of NameNode
memory can support about 1.2\,PB of raw HDFS space (400\,TB of data,
since blocks are 3-way replicated). This result is close to the
estimation of HDFS developers: ``1\,GB of metadata $\approx$ 1\,PB of
physical storage''~\cite{HDFSScalability}.
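A back-of-envelope check using our workload's parameters (the per-object figure is our own inference from these measurements, not a number reported by the HDFS developers): 1\,GB of NameNode memory supported about 400\,TB of data, i.e.,
\[
\frac{400\,\text{TB}}{192\,\text{MB/file}} \approx 2.1\,\text{M files},
\qquad
\frac{400\,\text{TB}}{128\,\text{MB/block}} \approx 3.1\,\text{M blocks},
\]
\[
\frac{1\,\text{GB}}{2.1\,\text{M} + 3.1\,\text{M objects}} \approx 190\ \text{bytes per metadata object}.
\]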

Using \sys allows us to perform this experiment using only 16\,TB of
disk storage, while a real deployment would require a total of 10\,PB
of disk storage.

\subsubsection{Performance degradation in HDFS}
\label{performance-degradation}


The above experiments use a workload that provides high
scalability. Other workloads are not as accommodating. We evaluate two
such workloads that can drastically degrade the performance of HDFS.

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hdfs_big_directory.pdf}}
\caption{\label{graph:hdfs_big_directory} HDFS throughput degradation as the size of directories increases.}
\end{figure}

In the first workload, all clients create files in the
same directory. As shown in Figure~\ref{graph:hdfs_big_directory}, the
aggregate system throughput steadily decreases as more files are
created. Further profiling allowed us to trace the cause of this
behavior to the source code: the NameNode uses an ArrayList data
structure to maintain an alphabetically sorted list of the files
inside a directory. Adding an element to a sorted array is an $O(N)$
operation, since it requires a suffix of the array to be shifted
by one position.  Therefore, the bigger the directory, the longer it
takes to add a file to it.
As a sanity check, we verified that if we limit the number of files written to each
directory, creating more files causes no performance degradation.
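The following sketch (ours, not HDFS source code) illustrates why insertion into a sorted ArrayList is $O(N)$: finding the slot is a cheap binary search, but making room requires shifting every later element.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified model of a directory keeping its file names sorted in an ArrayList.
class SortedDirectory {
    private final List<String> files = new ArrayList<>();

    // Binary search finds the insertion point in O(log N), but
    // ArrayList.add(index, e) shifts all later elements: O(N) per insert.
    void addFile(String name) {
        int pos = Collections.binarySearch(files, name);
        if (pos < 0) pos = -pos - 1;   // convert "not found" to insertion point
        files.add(pos, name);
    }

    List<String> list() { return files; }
}
```

A balanced tree (e.g.\ a TreeSet) would make each insertion $O(\log N)$; scattering files across directories, as in our check above, instead keeps $N$ small per directory.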



\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hdfs_big_file.pdf}}
\caption{\label{graph:hdfs_big_file} HDFS throughput degradation as the size of files increases.}
\end{figure}

In the second workload, one client creates a file and keeps
appending data to it. As shown in
Figure~\ref{graph:hdfs_big_file}, once the file grows sufficiently large,
the aggregate system throughput decreases steadily. Note that in this
experiment there are only a few clients and the system is not fully
saturated, which is why the aggregate system
throughput is lower than in the previous experiment.  Profiling
led us to the cause of the problem: before the
NameNode creates a new block for a file, it needs to calculate the
file's length.  It does this by scanning all existing blocks and
summing their lengths; this, too, is an $O(N)$
operation. We fixed this problem by adding a length field to each file
and updating the field when a block is added or updated. As
Figure~\ref{graph:hdfs_big_file} shows, after applying our fix the
system throughput no longer decreases as the files grow in size.

As before, \sys allows us to identify these performance issues without
requiring access to a large amount of disk storage. Running this
experiment in a real deployment would require 900\,TB of disk storage;
with \sys, we only need 1.5\,TB.

\subsubsection{DataNode scalability}
\label{sec:datanode-scalability}


As disk capacities increase every year, and most HDFS deployments use
multiple disks per DataNode, it is important that a DataNode's
performance not degrade as more storage capacity is added to it.
While running HDFS in hybrid mode---keeping some DataNodes real---we
observed uncommonly high latencies for some requests. Our profiling
indicated that the source of the problem was a disk scan that the
DataNode periodically performs on all its blocks.
Figure~\ref{graph:hdfs_blockscan} shows that the time a real node takes to
perform this scan increases linearly with the number of blocks stored
on the disk.  Unfortunately, this scan is a blocking operation,
preventing write requests and heartbeats from being sent or received.
As the duration of this scan becomes longer, it can have serious
performance consequences, including timeouts at the clients or even
missed heartbeats, which would cause unnecessary re-replication of the
DataNode's data. Facebook engineers confirmed this issue; to
address it, they modified HDFS to allow the block scan to be
performed in parallel with heartbeats and write requests~\cite{FBcommunication}.
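The essence of such a fix can be sketched as follows (our simplified model, not the actual patch): instead of holding the DataNode's lock for the whole scan, copy the block list under the lock and verify the snapshot outside it, so writes and heartbeats are never blocked for long.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of a DataNode's periodic block scan.
class DataNodeScanner {
    private final List<Long> blocks = new ArrayList<>();
    private final Object lock = new Object();

    void addBlock(long id) {
        synchronized (lock) { blocks.add(id); }
    }

    // Original design: the scan holds the lock for its whole duration,
    // stalling writes and heartbeats until every block is verified.
    // Fixed design (sketched here): snapshot the block list under the
    // lock, then verify the snapshot without holding it.
    int scanSnapshot() {
        List<Long> snapshot;
        synchronized (lock) { snapshot = new ArrayList<>(blocks); }
        int verified = 0;
        for (long id : snapshot) {
            verified++;   // stand-in for the per-block checksum read
        }
        return verified;
    }
}
```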


While reproducing this problem is easy, triggering it in a real
deployment would require 8\,TB of disk storage on a single DataNode;
with \sys, we triggered it using an 80\,GB disk. After
identifying the problem, we reproduced it on a real DataNode with
8\,TB of disk storage (Figure~\ref{graph:hdfs_blockscan}).

Note that although this problem can be triggered with only a few
machines, it would be hard to identify and tedious to reproduce during
debugging, since it takes at least a few hours for the latency
increase to become observable. \sys's time compression helps in this
case: if emulated nodes have exclusive access to a machine's
resources, the system runs at an accelerated speed, and the problem
manifests itself in a matter of minutes.




\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/hdfs_blockscan.pdf}}
\caption{\label{graph:hdfs_blockscan} Time of the block-scan procedure on a DataNode, as the number of blocks increases.}
\end{figure}




