\section{Experimental Results}

Our experimental setup is depicted in Figure \ref{network}. 
It includes two separate local networks, each connected via an ADSL line to a service provider and then to the general switched Internet.
%Network A consists of one D-Link DNS-323 NAS device equipped with a 500 Mhz ARM CPU and 64MB of memory.
%This machine has 2 300GB SATA drives configured in RAID-1.
%It is running Debian Linux.

Network A consists of one 2GHz Intel Core-2 powered Apple Macbook with 2GB of RAM, connected to a router using the wireless 802.11n protocol.
Both client and MDS programs are run on this machine.

Network B consists of a server with an AMD Athlon 64 X2 dual-core 2.2GHz CPU and 2GB of RAM running Ubuntu Linux, and a 3GHz Intel Xeon based Apple Mac Pro.
Interconnect is via switched Gigabit Ethernet.
The same 2GHz Intel Core-2 Apple Macbook from network A is also attached to this network.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/TestingNetwork}
\caption{{\bf Experimental Network:} two LAN environments connected via an ADSL line.}
\label{network}
\end{center}
\end{figure}

\subsection{Micro-Benchmarks}

We first perform several micro-benchmarks. 
%These are all performed with a single client and MDS running on the Macbook via the loopback interface.
We measure the time to stat a file and read the file's metadata, acquire a read lease, acquire a write lease, end the read lease, and finally end the write lease.
All of these require communication with a separate MDS process. 
The end-lease calls, however, are asynchronous and return without waiting for a response from the MDS.
Each reported time is the average over 1000 runs, in microseconds, measured using the {\tt gettimeofday()} system call.
Our results are presented in Table \ref{micro}.

\begin{table}[h!b!p!]
\caption{Metadata Micro-benchmarks (average time in $\mu$s)}
\begin{tabular}{ c | c | c | c | c }
test & local & remote (DSL) & remote (GigE) & mixed (DSL) \\ 
\hline
put rnode & 223 & 14,575 & 746 & n/a \\
get rnode & 472 & 35,308 & 724 & n/a \\
get read & 413 & 31,103 & 501 & 33,000 \\
get write & 791 & 4,403 & 524 & 786 \\
end read & 498 & 33,471 & 530 & 57,000 \\
end write & 874 & 5,403 & 525 & 829 
\end{tabular}
\label{micro}
\end{table}

Here, local refers to a local client communicating with a local MDS. 
Similarly, remote (DSL) refers to a local client and a remote MDS accessed via a DSL connection.
Remote (GigE) is the same, but using a Gigabit Ethernet connection for the client/MDS communication.
Finally, mixed (DSL) refers to a local client speaking to a local MDS, which in turn communicates with a remote MDS via DSL.

The major differences within columns are caused by the synchronous nature of the three {\tt get} calls, which all wait for a response from the server before returning.
The other methods ({\tt end read}, {\tt end write}, and {\tt put rnode}) all return immediately after any client-side processing.

From these results, we believe that, while using a remote MDS over a DSL connection is not viable for a client, working with one accessed in a LAN environment is perfectly feasible.

Also, while the mixed local/remote situation adds time to acquiring leases, the overhead of ending a lease is cut almost in half because no response is needed.
Unfortunately, acquiring a lease requires at least a yes/no response.

\subsection{Integrated Performance}

Next we test Ringer's overall performance as a distributed filesystem.

Because of Ringer's unique architecture, where clients always write locally and then propagate changes via reads, we could not find a pre-existing benchmark, such as Filebench~\cite{filebench}, that suited our needs.

Instead, we test the speed of Ringer and two other comparable file systems by reading and writing in the following situations: where both the client and MDS are local, where the MDS is connected via Gigabit Ethernet, and where the MDS is behind a DSL line.

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Read10x}
%\caption{{\b Read:} Time taken to read $2^n$ bytes of sequential data 10 times.}
%\label{read10x}
%\end{center}
%\end{figure}

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/Read50x}
\caption{Time in seconds to read $2^n$ bytes of sequential data.}
\label{read50x}
\end{center}
\end{figure}

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Read100x}
%\caption{{\b Read:} Time taken to read $2^n$ bytes of sequential data 100 times.}
%\label{read100x}
%\end{center}
%\end{figure}

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Read200x}
%\caption{{\b Read:} Time taken to read $2^n$ bytes of sequential data 200 times.}
%\label{read200x}
%\end{center}
%\end{figure}

Our first test is a series of reads, going from 2 bytes to 2 megabytes.
These are the average times out of 50 runs.
We ran this test with both client and MDS on the same machine (local,local), on a local area network with client and MDS on different machines (local,remote (gigE)), with a local client connected to an MDS over a DSL line (local,remote (dsl)), and with a local client, remote MDS, and secondary remote client, all connected on a local network (local,remote,remote). 

%Also, we plot the time of reading from another client over a DSL and a gigabit ethernet interface.

For comparison, we run the exact same test using the SSHFS filesystem over DSL and GigE.
SSHFS is similar to Ringer in that it is built using the FUSE interface.
However, it is a traditional client/server system using the OpenSSH protocol to transmit data.

Like Ringer, SSHFS maintains a local cache, reading out of this if possible.
We did four runs total with SSHFS, two with the cache enabled and two without.

%\ref{read10x},  \ref{read100x}, \ref{read200}.

Our results are depicted in Figure \ref{read50x}.

%These are four figures depict how the times change as we go from 10 to 200 repetitions.

We first note that the SSHFS (dsl) combination is prone to somewhat random spikes.
This, we believe, reflects the nature of DSL, although we consistently observed a very large peak in the time needed to read 2 bytes.
We do not have an explanation for this.

The flat nature of Ringer's performance as the data size increases occurs because only in the local,remote,remote (gigE) situation is data read over the network. 
On all the others, the data is found in Ringer's local cache, and so a read amounts to a sequential read from a local disk.

%Also, 

%% CUT OUT OR RE_DO DATA FOR SSHFH

As expected, Ringer over a DSL connection is noticeably slower than Ringer on a LAN.

Ringer's data transfer over the network is fairly efficient (in line with SSHFS), with read time growing as expected with the size of the data to be read in the local,remote,remote situation.

In the GigE network, the difference between cached SSHFS and Ringer is quite small. 
In the DSL network, however, reading out of the local cache with Ringer incurs roughly 75\% more overhead than with SSHFS.

% Note that for Ringer, except in the local,remote,remote situation, data is being read locally.

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Write10x}
%\caption{{\b Write:} Time taken to write $2^n$ bytes of sequential data 10 times.}
%\label{write10x}
%\end{center}
%\end{figure}

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/Write50x}
\caption{Time in seconds to write $2^n$ bytes of sequential data.}
\label{write50x}
\end{center}
\end{figure}

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Write100x}
%\caption{{\b Write:} Time taken to write $2^n$ bytes of sequential data 100 times.}
%\label{write100x}
%\end{center}
%\end{figure}

%\begin{figure}[htbp]
%\begin{center}
%\includegraphics[width=0.48\textwidth]{graphs/Write200x}
%\caption{{\b Write:} Time taken to write $2^n$ bytes of sequential data 200 times.}
%\label{write200x}
%\end{center}
%\end{figure}

Turning our attention to writes in Figure \ref{write50x}, we see that the situation is very similar.

Ringer writes data only to the local disk, and after this the writing client considers the operation to be complete.

Data is sent over the network only when another client reads the written data. 
For testing purposes, we consider this to be a read. 

Instead of data transfer, then, the write times shown here are dominated by the calls to the MDS, leading to the mostly flat performance of Ringer. 
The uptick at the largest sizes shows the effect of writing larger files to local disk.

SSHFS however sends all writes to the server, and so write time grows with write size.
Especially on a slow connection, network time comes to dominate disk access.

\subsection{Block Size}

We also examine the effect that varying the block size has on Ringer.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{graphs/block-size}
\caption{{\bf Block-Size:} Time needed to read and write 128K of data as the block size increases, with no network delay.}
\label{block-no-net}
\end{center}
\end{figure}

Because Ringer is built on FUSE, we did not change the size of the data blocks which Ringer handles at the local level. 
Instead, we changed the amount of data which is requested whenever a block is not in the local file cache.
As expected, with no network delay (see Figure \ref{block-no-net}), this does not affect performance greatly.

Because leases last only the length of the system {\tt read()} and {\tt write()} calls, we believe that very large block sizes here would result in faster access times, due to the reduction in the number of leases needed for a transaction.
Unfortunately, this system block size is defined by FUSE and is not accessible to the application writer.
