\section{Experimental Results}

Testing is still ongoing.
We have completed several small-scale evaluations, presented below, and the filesystem appears stable even in a wide-area, low-bandwidth environment.
Further tests will evaluate Ringer's performance, including search, as a large distributed file system.
Using the Amazon Elastic Compute Cloud, we plan to measure the scalability of Ringer as it grows from a few client nodes and one MDS to hundreds of clients and MDSs.

\subsection{Micro-Benchmarks}

We first present several micro-benchmarks. 
%These are all performed with a single client and MDS running on the Macbook via the loopback interface.
We measure the time to store a file's metadata, to stat a file and read its metadata, to acquire a read lease, to acquire a write lease, to end the read lease, and finally to end the write lease.
All of these require communication with a separate MDS process. 
%The {\tt end\_read()} and {\tt end\_write()} calls are asynchronous and return without waiting for a response from the MDS however.
Each reported time is the mean over $1000$ runs, in microseconds, measured using the {\tt gettimeofday()} system call.
Our results are presented in Table \ref{micro}.

\begin{table}[h!b!p!]
\caption{Metadata micro-benchmarks (average time in microseconds)}
\begin{tabular}{ c | c | c | c | c }
test & local & remote (DSL) & remote (GigE) & mixed (DSL) \\
\hline
{\tt put\_rnode()} & 223 & 14,575 & 746 & n/a \\
{\tt get\_rnode()} & 472 & 35,308 & 724 & n/a \\
{\tt get\_read()} & 413 & 31,103 & 501 & 33,000 \\
{\tt get\_write()} & 791 & 4,403 & 524 & 786 \\
{\tt end\_read()} & 498 & 33,471 & 530 & 57,000 \\
{\tt end\_write()} & 874 & 5,403 & 525 & 829
\end{tabular}
\label{micro}
\end{table}

{\em local} refers to a client communicating with an MDS over the loopback device.
{\em remote (DSL)} refers to a client and a remote MDS accessed via a DSL connection.
{\em remote (GigE)} is the same, but using a gigabit Ethernet connection for the client/MDS communication.
Finally, {\em mixed (DSL)} refers to a local client speaking to a local MDS, which in turn communicates with a remote MDS via DSL.

The major differences within each column are caused by the synchronous nature of the three {\tt get*()} calls, which all wait for a response from the server before returning.
The other methods ({\tt end\_read()}, {\tt end\_write()}, and {\tt put\_rnode()}) return as soon as any client-side processing completes.

%From these results, we believe that, for a client, while using a remote MDS over a DLS connection is not viable, working with one accessed in a LAN environment is perfectly feasible.

While the latency of DSL obviously adds to the time required to acquire leases, the overhead of ending one is cut almost in half, because no response from the MDS is needed.
%Unfortunately, to acquire a lease at lease a yes/no response is required.

\subsection{Integrated Performance}

Next we compare the performance of Ringer and SSHFS, chosen because it is also FUSE-based, as each performs a sequential read over a gigabit Ethernet connection.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/Read50x}
\caption{Time in seconds to read $2^n$ bytes of sequential data.}
\label{read50x}
\end{center}
\end{figure}

The results depicted in Figure \ref{read50x} show the elapsed time in seconds to complete a series of reads, ranging from 2 bytes to 2 megabytes.
Each point is the mean of 50 runs.

For this test of Ringer, one client and the MDS are hosted on the same machine, while the reading client mounts the filesystem on a separate computer.
SSHFS uses a client-server model.
The times for Ringer and SSHFS are very similar.
We interpret this as showing that the added overhead Ringer incurs by locating files via an MDS is acceptably small and is largely eclipsed by data transfer.

Other results (not shown) compare the performance of Ringer and SSHFS on writes, as well as over a DSL connection.
We also compare Ringer and NFS.
Finally, we examine the effect of Ringer's block size on read performance.

