\subsection{Case Study: Cassandra}

Cassandra~\cite{lakshman09cassandra} borrows many elements from Amazon's Dynamo storage system.
Unlike HDFS and HBase, Cassandra does not rely on a single metadata
node to manage the namespace and cluster membership. Instead, Cassandra
uses a distributed hash table (DHT) protocol for these tasks, eliminating
that single scalability bottleneck. Although one might expect this design choice
to result in better scalability, our experience applying \sys to Cassandra shows that this expectation
is unfounded: Cassandra's scalability problems appear at
the scale of hundreds of nodes, even before we turn on \scheme compression.

We find two major problems that prevent Cassandra from scaling to tens of thousands
of nodes. First, if multiple nodes concurrently join an existing Cassandra cluster, there
is a non-negligible probability that some of them will fail. This problem was confirmed
by a Cassandra developer writing under the pseudonym \emph{geekatcmu} who, when asked about the
issue, told us: ``you have to wait for each node to
bootstrap before starting the next one''~\cite{Cassandracommunication}. As a result, starting a cluster with tens of thousands of nodes
may take prohibitively long, both in practice and in experiments, because a node usually
takes several minutes to stabilize. This problem is rooted in the design of DHTs,
which make it hard to maintain a consistent namespace while a large number of nodes
are joining or leaving.
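A back-of-the-envelope estimate illustrates how prohibitive sequential bootstrapping becomes; assuming, for concreteness, three minutes of stabilization time per node (the exact figure varies by deployment), a sequential start of a 10,000-node cluster takes
\[
10{,}000 \text{ nodes} \times 3 \text{ min/node} = 30{,}000 \text{ min} \approx 21 \text{ days}.
\]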

The second problem is that the number of threads per node grows quickly with the
size of the cluster. This
is because, in the current implementation, each Cassandra node creates four sockets---incoming
and outgoing sockets for both the data and metadata streams---to every other
Cassandra node, and assigns a separate thread to each socket. In our experiments,
the number of threads created quickly hit the system limit, which we did not have sufficient
privileges to change. Even if we had, the large number of threads would have created
a memory problem: given that a thread's stack takes at
least 128KB\footnote{This number may vary depending on the hardware architecture and operating system.}
of memory in the JVM,
in a cluster of 10,000 nodes each node would maintain roughly
$4 \times 9{,}999 \approx 40{,}000$ threads and would thus need at least 5GB of memory
for thread stacks alone. This problem could be addressed by an
implementation based on non-blocking I/O (NIO), which multiplexes all sockets
onto a single thread.
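To illustrate the NIO approach, the following sketch (hypothetical code for exposition, not taken from Cassandra; the class and method names are our own) services an arbitrary number of sockets from a single thread using Java's \texttt{Selector}, whereas the blocking-I/O design would dedicate a thread, and at least 128KB of stack, to each socket:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorSketch {

    // Accept nClients connections and read one message from each,
    // all on the calling thread; returns the number of sockets served.
    static int serveWithOneThread(int nClients) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Open the client sockets; with blocking I/O, each would
        // require its own reader thread on the server side.
        SocketChannel[] clients = new SocketChannel[nClients];
        for (int i = 0; i < nClients; i++) {
            clients[i] = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
            clients[i].write(ByteBuffer.wrap(("msg" + i).getBytes()));
        }

        int served = 0;
        while (served < nClients) {
            selector.select();  // one thread waits on all sockets at once
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch;
                    while ((ch = server.accept()) != null) {
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    key.cancel();  // done with this socket
                    served++;
                }
            }
            selector.selectedKeys().clear();
        }
        for (SocketChannel c : clients) c.close();
        server.close();
        selector.close();
        return served;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("served " + serveWithOneThread(3) + " sockets, 1 thread");
    }
}
```

Under this design the per-node thread count stays constant regardless of cluster size, so the 5GB stack overhead disappears.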

In sum, with the current implementation of Cassandra, deploying a cluster of more
than 1,000 nodes is inadvisable, as the Cassandra developer
\emph{geekatcmu} remarked when we asked him to comment
on our findings~\cite{Cassandracommunication}: indeed, Cassandra's scalability
is so limited that these problems can be observed even without the help of
\sys.
