\chapter{Experimental Results}

Testing is still ongoing.
We have completed several small-scale evaluations, presented below, which suggest that the filesystem is relatively stable, even in a wide-area, low-bandwidth environment.
Further tests will evaluate Ringer's performance, including search, as a large distributed file system.
Using the Amazon Elastic Compute Cloud, we plan to measure the scalability of Ringer as it grows from a few client nodes and one MDS to hundreds of clients and MDSs.

\subsection{Micro-Benchmarks}

\begin{table*}[t]
\begin{center}
%\begin{table}[h!b!p!]
\begin{tabular}{ c | c | c | c | c }
test & local & remote (DSL) & remote (GigE) & mixed (DSL) \\ 
\hline
{\tt put\_rnode()} & 223 & 14,575 & 746 & n/a \\
  {\tt get\_rnode()} & 472 & 35,308 & 724 & n/a \\
  {\tt get\_read()} & 413  & 31,103 & 501 & 33,000  \\
  {\tt get\_write()} & 791 & 4,403 & 524 & 786  \\
  {\tt end\_read()} & 498 & 33,471 & 530 &  57,000 \\
  {\tt end\_write()} & 874 & 5,403 & 525 &  829 
\end{tabular}
\caption{Metadata micro-benchmarks (in microseconds).}
\label{micro}
\end{center}
\end{table*}

We first present several micro-benchmarks. 
%These are all performed with a single client and MDS running on the Macbook via the loopback interface.
We measure the time to stat a file and read the file's metadata, acquire a read lease, acquire a write lease, end the read lease, and finally end the write lease.
All of these require communication with a separate MDS process. 
%The {\tt end\_read()} and {\tt end\_write()} calls are asynchronous and return without waiting for a response from the MDS however.
Each reported time is the average over $1000$ runs, in microseconds, measured using the {\tt gettimeofday()} system call.
Our results are presented in Table \ref{micro}.

{\em local} refers to a client communicating with an MDS over the loopback device. 
{\em remote (DSL)} refers to a client and a remote MDS accessed via a DSL connection.
{\em remote (GigE)} is the same, but using a gigabit Ethernet connection for the client/MDS communication.
Finally, {\em mixed (DSL)} refers to a local client speaking to a local MDS, which in turn communicates with a remote MDS via DSL.

The major differences within columns are caused by the synchronous nature of the three {\tt get*()} calls, which all wait for a response from the server before returning.
The other methods ({\tt end\_read()}, {\tt end\_write()} and {\tt put\_rnode()}) all return immediately after any client-side processing.

%From these results, we believe that, for a client, while using a remote MDS over a DLS connection is not viable, working with one accessed in a LAN environment is perfectly feasible.

While the latency of DSL obviously adds to the time required to acquire leases, the overhead of ending a lease is cut almost in half because no response is needed.
%Unfortunately, to acquire a lease at lease a yes/no response is required.

\subsection{Integrated Performance}

Next we compare Ringer with SSHFS (chosen because it is also FUSE-based) as each performs a sequential read over a gigabit Ethernet connection.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/Read50x}
\caption{Time in seconds to read $2^n$ bytes of sequential data.}
\label{read50x}
\end{center}
\end{figure}

The results depicted in Figure \ref{read50x} show elapsed time in seconds to complete a series of reads, from 2 bytes to 2 megabytes.
Each point is the mean of 50 runs.

For this test of Ringer, one client and the MDS are hosted on the same machine, while the reading client is mounted on a separate computer. 
SSHFS uses a client-server model. 
The times for Ringer and SSHFS are very similar. 
We interpret this as showing that the added overhead Ringer incurs by locating files via an MDS is acceptably small and is largely eclipsed by data transfer time.

Turning our attention to writes in Figure \ref{write50x}, we see that the situation is very similar.

Ringer writes data only to the local disk; once that write completes, the writing client considers the operation finished.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/Write50x}
\caption{Time in seconds to write $2^n$ bytes of sequential data.}
\label{write50x}
\end{center}
\end{figure}

Data is sent over the network only when another client reads the written data. 
For testing purposes, we consider this transfer to be part of the read. 
The write times shown here are therefore dominated not by data transfer but by the calls to the MDS, which leads to the mostly flat performance of Ringer.  
The uptick at the right of the graph shows the effect of writing larger files to local disk. SSHFS, however, sends all writes to the server, so write time grows with write size.
Especially on a slow connection, network time comes to dominate disk access.

%Finally, we look at the effect changing the block size of Ringer has on read performance.  
\subsection{EC2}

As another experiment to test scalability, we set up a larger Ringer network on a set of nodes using the Amazon Elastic Compute Cloud framework.
Figure \ref{ec2-client} shows the total elapsed time in seconds to set up and take down a network composed of one MDS and between one and ten clients.
Each node is run on a separate small (1 core) instance, with all instances in the same availability zone.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{results/clients-ec2}
\caption{Time in seconds to launch and stop concurrent clients.}
\label{ec2-client}
\end{center}
\end{figure}


%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Write100x}
%\caption{{\b Write:} Time taken to write $2^n$ bytes of sequential data 100 times.}
%\label{write100x}
%\end{center}
%\end{figure}

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Write200x}
%\caption{{\b Write:} Time taken to write $2^n$ bytes of sequential data 200 times.}
%\label{write200x}
%\end{center}
%\end{figure}

\subsection{Block Size}

We also examine the effect that varying the block size has on Ringer.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/block-size}
\caption{{\bf Block-Size:} Time needed to read and write 128K of data as the block size increases, with no network delay.}
\label{block-no-net}
\end{center}
\end{figure}

Because Ringer is built on FUSE, we did not change the size of the data blocks Ringer handles locally. 
Instead, we changed the amount of data requested whenever a block is not in the local file cache.
As expected, with no network delay (see Figure \ref{block-no-net}), this does not affect performance greatly.

Because leases last only the length of the system {\tt read()} and {\tt write()} calls, we believe that larger block sizes here would result in faster access times, due to the reduction in the number of leases needed for a transaction.
Unfortunately, this system block size is defined by FUSE and is not accessible to the application writer.



%ALSO, (HOPEFULLY), RESULTS FROM A RUN ON EC2 WITH A LARGE NUMBER OF NODES GO HERE (1-3 MORE GRAPHS...)


