\documentclass[12pt,a4paper]{article}

\usepackage{times}
\usepackage[dvipdf]{graphicx}
\usepackage{color}
\newcommand{\TODO}[1]{\textcolor{red}{\textbf{[TODO:#1]}}}

\begin{document}
% HEADER
\title{SCOOP Web Server \\ Project Report}
\author{Karolina Alexiou, Zsolt Istv\'{a}n, Erik Jonsson}

\begin{titlepage}
\maketitle
\bigskip
\thispagestyle{empty} 
\tableofcontents
\end{titlepage}

% BODY
\section{Design and Implementation}

\subsection{Overview}

The aim of the project was to create, and evaluate the performance of, a web
server capable of answering incoming HTTP GET requests in
parallel. The server was developed in EiffelStudio 7.0; it supports
requests for HTML, text, and image files, has built-in logging of
request events, and is parallelized using Eiffel's SCOOP
mechanism. The parallelism is
achieved by assigning a \emph{separate} handling object to serve each
of the clients that connect to the server in parallel.
The web server has been tested with various web
browsers\footnote{Firefox, Chromium, Internet Explorer and Opera} and was evaluated using
the \texttt{httperf} benchmarking tool.
An abstract view of the system can be seen in Figure~\ref{fig-overview}.

\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=10cm]{img/overview-color.eps}
\caption{The architecture of the parallel web server}\label{fig-overview}
\end{figure}

\subsection{Connection pool}
This is the main class, CONNECTION\_POOL, which is instantiated when the
application is started.
The server starts listening for connections on the port specified as a
command-line argument, or on the default port 8080 if no arguments are
given.
The LOGGER is
also created and initialized by this class. Each
incoming client connection is accepted, and a separate handling object of class
CONNECTION\_THREAD is immediately created for it. These
CONNECTION\_THREAD instances are then responsible for serving clients in parallel.
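A minimal sketch of this accept loop follows; feature names such as
\texttt{run}, \texttt{launch} and \texttt{accepted\_descriptor} are
illustrative, not necessarily the project's actual ones:

\begin{verbatim}
run
        -- Accept clients and hand each one to a separate handler.
    local
        l_handler: separate CONNECTION_THREAD
    do
        from
        until
            should_stop
        loop
            listen_socket.accept
            -- Creating a separate object places it on its own processor.
            create l_handler.make (listen_socket.accepted_descriptor, logger)
            launch (l_handler)
        end
    end

launch (a_handler: separate CONNECTION_THREAD)
        -- Wrapper required by SCOOP for the call on a separate target.
    do
        a_handler.execute
    end
\end{verbatim}

Note that the call on the separate handler must go through a wrapper
feature (\texttt{launch}), a SCOOP requirement discussed later in this report.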

\subsection{Request handling}
The CONNECTION\_THREAD class manages the connection with a
client. It is initialized with the file descriptor of the
client socket and with the LOGGER instance, which is a \emph{separate} object because it is
shared among all CONNECTION\_THREAD instances. The client socket is
then recreated from the file descriptor. This is done by instantiating a
modified version of the NETWORK\_STREAM\_SOCKET class, which allows a
socket to be initialized from a file descriptor passed as an argument.




For every incoming client request, the CONNECTION\_THREAD class
instantiates a REQUEST object. If there is a problem reading from
the socket, it closes
the connection; otherwise it invokes methods on the REQUEST object
to parse and serve the request. If the request header contains the
keep-alive value, the connection thread keeps the connection open
for further requests; otherwise it closes the socket and exits after
serving the request. This
class also invokes the LOGGER upon completion of requests and upon
connection closing.
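The serving loop described above might look roughly as follows (a
sketch with illustrative feature names):

\begin{verbatim}
execute
        -- Serve requests on this connection until it should close.
    local
        l_request: REQUEST
        l_done: BOOLEAN
    do
        from
        until
            l_done
        loop
            create l_request.make (client_socket)
            if l_request.read_failed then
                l_done := True
            else
                l_request.parse
                l_request.serve
                log_request (l_request)
                -- Keep the connection open only for keep-alive sessions.
                l_done := not l_request.is_keep_alive
            end
        end
        client_socket.close
    end
\end{verbatim}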


The REQUEST class is responsible for reading in the incoming request
from the socket, parsing it to determine which file is requested and
whether the connection should persist, and
finally sending the appropriate response through the connection
socket. If the requested file is not found, the request object sends
the appropriate 404 response. If a directory is requested without
specifying a file, the request object tries to serve the \emph{index.html}
in that directory. All files and directories accessible to clients reside
in a \emph{www/} directory relative to where the server executable is running.


\subsection{Logger}
This class writes out event information to the server log. For
each connection request it logs the time it was accepted, the file
request(s) that followed together with the associated HTTP status, and
the time the connection was closed. The LOGGER object is instantiated
once in the root class and then shared among
CONNECTION\_THREAD instances. This LOGGER object is
declared as \emph{separate} so that it can be shared. Since we rely on
SCOOP to handle concurrency on the LOGGER, it is guaranteed that at any one
time only one invocation of a method is active, which
means that there will be no undesirable interleaving of logging
messages.
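A sketch of how a handler invokes the shared logger under SCOOP
(feature names illustrative): the wrapper's separate formal argument
causes the handler to reserve the logger's processor for the duration
of the call, which is exactly what rules out interleaved log entries.

\begin{verbatim}
log_request (a_line: STRING)
        -- Hand the message to the shared logger.
    do
        log_on (logger, a_line)
    end

log_on (a_logger: separate LOGGER; a_line: separate STRING)
        -- SCOOP locks a_logger's processor for the duration of
        -- this feature, so entries from different handlers can
        -- never interleave.
    do
        a_logger.log (a_line)
    end
\end{verbatim}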

\pagebreak
\section{Evaluation}

This section evaluates our implementation using various benchmarks,
inspired by real-world use-cases. We will, on one hand,
show how the parallel version of the web server 
performs better than the serial one. On the other hand, we will
discuss possible reasons why the speedup falls short of
our expectations in some cases. The results also show that, as expected,
logging often has a considerable negative effect on
performance.

Before presenting the experiments, a short description of our
experimental setup follows. To run the tests we used two machines
(\emph{server} and \emph{client})
interconnected by Gigabit Ethernet. The \emph{server} was an
eight-core Intel Xeon L5520 machine running at 2.26GHz. The role of
the \emph{client} was fulfilled by a machine equipped with a dual-core
Intel Core2 T9400 processor clocked at 2.5GHz. On the client machine we
used the \texttt{httperf} tool to generate the workload, and on the
server we ran the ``finalized'' versions of the web
server. These were 1) the serial version without logging, 2) the parallel
version without logging, and
3) the parallel version with only on-disk\footnote{We disabled all
output to the screen for all three versions in order to achieve maximal
performance and concurrency.} logging enabled.


\subsection{File size}

The first experiment we conducted aimed at determining the effect of
the requested file size on the speedup provided by the two parallel
versions (with and without logging). We expected that as the file size
grows, the overhead of spawning threads in the parallel versions will
become insignificant and the gain of parallel processing becomes
visible.
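Throughout the evaluation, speedup is meant in the usual sense,
relative to the serial version on the same workload:
\[
S = \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}},
\]
where $T_{\mathrm{serial}}$ and $T_{\mathrm{parallel}}$ denote the
total runtimes of the serial and the respective parallel version.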

\begin{figure}[htp]
\includegraphics[angle=270,width=\linewidth]{fig/filesize-speedup}
\caption{Speedup as a function of file size}\label{fig-filesize-graph}
\end{figure}

The file sizes we used ranged from 5KB to 3.3MB. We created 8
connections on the client, and requested the same file 10 times from
each connection (in the same session) in parallel. The relative
speedup of the parallel versions can be seen in
Figure~\ref{fig-filesize-graph}. As expected, for small file sizes the
cost of thread management is comparable to the communication costs,
so the parallel versions bring no improvement at all; in fact,
the logging version is even slower than the serial one.
For large files the cost of sending the data over the network becomes
dominant, yielding much better speedups in both cases.

\begin{figure}[htp]
\includegraphics[angle=270,width=\linewidth]{fig/filesize}
\caption{Average response time as a function of file size}\label{fig-filesize-rt-graph}
\end{figure}

Figure~\ref{fig-filesize-rt-graph} shows the average calculated response
times\footnote{In this context, by ``calculated response time'' we mean
  the total runtime of the experiment divided by the total number of
requests.} for the three versions. Even though the speedup of the parallel
versions is not very large, there is a secondary factor to take into
account. In the serial version of the server, the response times of
individual requests actually range from the order of milliseconds to the
order of seconds. This is because in the serial version,
once a session is initiated it has to be served to the very end before
another request can be handled. If web servers behaved like this
in the real world, browsing the Internet would be impossible. The
parallel version, on the other hand, ensures the interleaved serving
of different clients, which in turn guarantees that clients'
requests will be served even if someone initiates an
arbitrarily long session.

\subsection{Session length}

\begin{figure}
\includegraphics[angle=270,width=\linewidth]{fig/sesslen-speedup}
\caption{Speedup as a function of session length}\label{fig-sesslen-graph}
\end{figure}

In the second experiment we inspected how the total amount of transmitted data
influences the runtime. In this case, however, instead of requesting
bigger and bigger files, the client sent more and more requests
as part of the same session. The requested file was 5KB in size, and we
again had a parallelism level of 8 on the client. The numbers in
Figure~\ref{fig-sesslen-graph} confirm our intuition: the
logging version does not perform well when there is high contention on
the logger. For the simple parallel
version the amount of transmitted data is the only important factor,
but the version that also does logging is severely affected
by the need to synchronize on the logger each time a request arrives
and is served. We chose to
request a small file to emphasize this effect, and to better
illustrate the costs of a centralized logging scheme.

\subsection{Parallel clients}

\begin{figure}
\includegraphics[angle=270,width=\linewidth]{fig/conns-file-slowdown}
\caption{The effect of parallel connections on performance}\label{fig-conns-graph}
\end{figure}

After experimenting with the session length and the file size, we
turned our attention to parallel performance\footnote{The aggregated
throughput of all connections.}. We were particularly
interested in how a large number of concurrent clients affects the
server's performance. To find out, we increased the number of
parallel connections the client initiated. Each connection contained
10 requests for a large file on the server. We expected to see a
degradation of performance as soon as the number of threads became
significantly higher than the number of cores.  The graph in
Figure~\ref{fig-conns-graph} shows the performance for a varying number
of parallel connections, relative to the ``ideal'' case of eight
connections targeting the server machine. To our surprise, the graph
flattens out instead of going down for hundreds of connections.  The
explanation, we believe, is twofold: on one hand, the dominant action
in these experiments is sending data over the network, and it is
possible to saturate the Ethernet link even without using all cores of
the machine. On the other hand, the SCOOP engine may have an overhead
that hinders performance scaling beyond some point. However, we have
no further evidence to support this latter speculation.

\pagebreak
\section{SCOOP Discussion}


\subsection{Separate calls}

SCOOP aims to abstract away a lot of the lower level complications of
concurrency, and to provide the programmer with clear principles of use
to make construction of concurrent programs easier. While in theory the
simplicity of the \emph{separate} keyword is compelling to the
programmer, in practice\footnote{with the current implementation of
  SCOOP in Eiffel} it takes more than just adding a few keywords to
parallelize the source code of an application. 

One of the additional requirements is that the programmer has to wrap
every call on a \emph{separate} target in an auxiliary
method. While we understand the technical reasons behind this requirement, we think
that often the programmer will only need to put the separate call
inside the auxiliary method, in which case the requirement merely
increases the complexity of the source code. An argument
in favor of the wrapper methods is the
possibility of adding preconditions to them, which are automatically
treated as wait conditions. Our concern with this functionality is
that, depending on the types of the method arguments, a \emph{require} block may either raise
an exception or wait forever when the condition is not met. We believe
that these two-fold semantics of preconditions make code reuse somewhat more
difficult.
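To illustrate the two-fold semantics, consider a hypothetical wrapper
(the \texttt{BUFFER} class and its features are invented for this example):

\begin{verbatim}
put (a_buffer: separate BUFFER; a_item: INTEGER)
        -- Because a_buffer is separate, the precondition is a wait
        -- condition: the call blocks until the buffer has room.
        -- Were a_buffer non-separate, the same require block would
        -- instead raise an exception when violated.
    require
        not a_buffer.is_full
    do
        a_buffer.extend (a_item)
    end
\end{verbatim}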
 


%SCOOP aims to abstract away a lot of the lower level complications of
%concurrency, and to provide the programmer with clear principles of use
%to make construction of concurrent programs easier. The trade-off is
%that the programmer has less direct control over the exact
%execution.

%While in most
%cases it is beneficial that SCOOP ignores the separateness of
%objects which seem to bring no concurrency into execution, sometimes
%it may decide wrong. In our project the CONNECTION\_THREAD was
%declared as a local \emph{separate} variable in a feature. At first it
%ran concurrently with the CONNECTION\_POOL, but when we later
%introduced a \emph{separate} LOGGER the program ran
%sequentially instead. We suspect that this behavior was related to SCOOP
%code analysis, or some other internal optimization, because making the
%CONNECTION\_THREAD an object variable solved the problem, and resulted
%in parallel execution. 

  
%Somehow adding the logging made the CONNECTION\_THREAD run sequentially, and then of course the LOGGER was also sequential since it is now only accessed in a sequential way.


%This bug was hard to find, since it was very unclear what had gone wrong. In this case SCOOP was not intuitive and the semantics of \emph{separate} confused us more than it helped.

\subsection{Object passing}
Many of the problems we encountered while working with SCOOP are
related to object passing. If we want to call features on an object
declared \emph{separate}, any object we pass along as an argument must be
declared \emph{separate} on the receiving side. But if an argument is
\emph{separate}, the receiving object will call back to the original
processor any time it needs to
manipulate that argument, and we end up with less concurrency.

One example of this is in the CONNECTION\_POOL class where we create 
new CONNECTION\_THREADs to serve incoming connections on new
sockets. If we pass the client's socket directly to the constructor of the
CONNECTION\_THREAD it will call back to the CONNECTION\_POOL every
time it needs to send data over the socket, meaning almost all work is
done in CONNECTION\_POOL. Our solution was to instead pass the descriptor of the socket and then
re-instantiate the socket from this descriptor in the CONNECTION\_THREAD.
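The resulting constructor might look roughly like this
(\texttt{make\_from\_descriptor} is our assumed name for the creation
procedure added to the modified NETWORK\_STREAM\_SOCKET):

\begin{verbatim}
make (a_fd: INTEGER; a_logger: separate LOGGER)
        -- Recreate the client socket locally from its descriptor,
        -- so that all subsequent socket I/O runs on this handler's
        -- processor instead of calling back to the CONNECTION_POOL.
    do
        create client_socket.make_from_descriptor (a_fd)
        logger := a_logger
    end
\end{verbatim}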

 
The above ``fix'' produced another problem, caused by our
improper use of sockets and file descriptors.
After the client's socket was passed to the CONNECTION\_THREAD, the method
returned, and since the socket object had no references from other objects, it was garbage
collected, causing the underlying network socket to be closed. As a workaround, we stored
all sockets in an array to make sure they are not garbage collected.
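A minimal sketch of the workaround (names illustrative):

\begin{verbatim}
open_sockets: ARRAYED_LIST [NETWORK_STREAM_SOCKET]
        -- Accepted sockets, retained only so that the garbage
        -- collector cannot collect them and close the
        -- underlying network connections.

remember (a_socket: NETWORK_STREAM_SOCKET)
        -- Keep a_socket alive for the lifetime of the pool.
    do
        open_sockets.extend (a_socket)
    end
\end{verbatim}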

\medskip

When devising the project architecture we thought that SCOOP would make
implementing a logger in a concurrent way very easy, a perfect example
where parallelism is achievable by just adding the \emph{separate} keyword. However, we
again ran into the object passing problem when calling the
\emph{log} method with string arguments. Here
we circumvented the problem by accepting a \emph{separate} string as a
parameter in the logger, and then converting it character by
character to a local string, which can in turn be
written to the file. However, this probably incurs
a lot of overhead, because SCOOP has to switch between processors
for each character that is logged.
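A sketch of this conversion (feature names illustrative); note that
every access to \texttt{a\_message} is itself a call back to the
client's processor, which is where the overhead comes from:

\begin{verbatim}
log (a_message: separate STRING)
        -- Import the separate string character by character
        -- into a local copy before writing it to the log file.
    local
        l_copy: STRING
        i: INTEGER
    do
        create l_copy.make (a_message.count)
        from
            i := 1
        until
            i > a_message.count
        loop
            -- Each item/count access crosses processors.
            l_copy.extend (a_message.item (i))
            i := i + 1
        end
        log_file.put_string (l_copy)
    end
\end{verbatim}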

\medskip

In conclusion: while the effort SCOOP makes to let developers
reason about concurrent programs at a higher level of abstraction is
commendable, the implementation still lacks some features that would
make SCOOP an outstanding candidate for general-purpose parallel programming.

\end{document}
