\section{Performance Evaluation}
\label{performance}

AnnoFlow makes no claim of improving the performance of a
platform-as-a-service application. Tracking policy objects and
enforcing filter boundaries necessarily incurs overhead at
application runtime. However, AnnoFlow must not impose so much
overhead that it renders the application unusable. All of
our test cases are implemented on top of the CloudSpace platform as
a service, and we ran each test with and without AnnoFlow enabled.

While previous work on data flow assertions measured raw
performance on a single system, AnnoFlow is designed to run in a
cloud environment. We therefore measured the impact on the user
experience by timing complete HTTP requests. The CloudSpace instance
was hosted on Amazon's EC2 cluster in Northern Virginia, and the user
accessed it through Mozilla Firefox from Blacksburg, Virginia.
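A minimal sketch of this measurement harness is shown below, assuming that overhead is computed as the relative increase in mean response time and abstracting an end-to-end HTTP request as a \texttt{Runnable}; the class and method names are illustrative, not part of AnnoFlow or CloudSpace.

```java
public class RequestTimer {
    // Time one end-to-end request (abstracted as a Runnable), in milliseconds.
    static long timeMillis(Runnable request) {
        long start = System.nanoTime();
        request.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Relative increase in response time, as a percentage of the baseline.
    static double overheadPercent(double baselineMs, double withAnnoFlowMs) {
        return (withAnnoFlowMs - baselineMs) / baselineMs * 100.0;
    }

    public static void main(String[] args) {
        // Example with made-up numbers: 100 ms without tracking,
        // 136.68 ms with tracking enabled -> 36.68% overhead.
        System.out.printf("overhead: %.2f%%%n", overheadPercent(100.0, 136.68));
    }
}
```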

\begin{figure}
\centering
\includegraphics{figure/PerfChart.png}
\caption{Response times for our test cases on CloudSpace with AnnoFlow disabled and enabled}
\end{figure}

In the \emph{FilterMethod} test case we constructed a very simple
filtered method with an \texttt{AuditFilter} filter and passed into it an
object carrying a \texttt{TopSecret} policy. The method simply audits
every object that passes through it, using I/O to write to the logging
framework. Each request generated 50 such method invocations, simulating
50 audits. The cost of performing I/O an extra 50 times is clearly
evident, yielding a 36.68\% overhead.
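To make the shape of this test concrete, the following is a hypothetical sketch of an audit filter in plain Java. The names (\texttt{AuditFilterSketch}, \texttt{TopSecret}) and the in-memory log are illustrative stand-ins, not AnnoFlow's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the FilterMethod test case; these names are
// assumptions, not AnnoFlow's actual API.
public class AuditFilterSketch {
    // Stand-in for a policy object attached to tracked data.
    static class TopSecret {
        final String payload;
        TopSecret(String payload) { this.payload = payload; }
    }

    // Stand-in for the logging framework; each entry is one audit I/O.
    static final List<String> auditLog = new ArrayList<>();

    // The filter boundary: audit the object, then run the real method body.
    static String filteredMethod(TopSecret obj) {
        auditLog.add("AUDIT: " + obj.payload); // the extra I/O per invocation
        return obj.payload.toUpperCase();      // the method's actual work
    }

    public static void main(String[] args) {
        // One request in the test case triggers 50 audited invocations.
        for (int i = 0; i < 50; i++) {
            filteredMethod(new TopSecret("record-" + i));
        }
        System.out.println("audited invocations: " + auditLog.size());
    }
}
```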

In the \emph{FilterNetwork} test case we simulated an ``anchored'' piece
of data flowing through a filter. The \texttt{NativeObject} policy instructs
the runtime that this piece of data must not leave the server (as would be
appropriate for sensitive data). When such an object passes through a
\texttt{NetworkFilter} filter, an exception is thrown. Each request generated
50 such exceptions, simulating 50 pieces of sensitive data attempting to
leave the system and being caught. Because no auditing was performed, the
overhead stayed at around 2.87\%.
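The mechanism can be sketched as follows, assuming the policy reduces to a per-object flag checked at the network boundary; the names (\texttt{Tracked}, \texttt{NativeOnlyViolation}) are hypothetical, not AnnoFlow's actual API.

```java
// Illustrative stand-in for the FilterNetwork test case.
public class NetworkFilterSketch {
    static class NativeOnlyViolation extends RuntimeException {
        NativeOnlyViolation(String message) { super(message); }
    }

    // A value whose policy marks it as "anchored" to the server.
    static class Tracked {
        final Object value;
        final boolean nativeOnly; // stand-in for the NativeObject policy
        Tracked(Object value, boolean nativeOnly) {
            this.value = value;
            this.nativeOnly = nativeOnly;
        }
    }

    // The network filter boundary: refuse to let anchored data leave.
    static Object networkFilter(Tracked t) {
        if (t.nativeOnly) {
            throw new NativeOnlyViolation("anchored data may not leave the server");
        }
        return t.value;
    }

    // One request in the test case: 50 escape attempts, all caught.
    static int simulateRequest() {
        int caught = 0;
        for (int i = 0; i < 50; i++) {
            try {
                networkFilter(new Tracked("sensitive-" + i, true));
            } catch (NativeOnlyViolation e) {
                caught++;
            }
        }
        return caught;
    }

    public static void main(String[] args) {
        System.out.println("violations caught: " + simulateRequest());
    }
}
```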

In the \emph{SanitizeHTML} test case we simulated a method
(for instance, one that reads from a database) encoding its return value
as HTML to ensure it can be displayed safely. Each request generated 50
string replacement calls on a method return value. This test case
required no policies and only one filter, and no logging functionality
was used. The overhead was only 0.19\%.
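A plausible sketch of such a filter is below; the entity-encoding rules shown are a common minimal set, and the class and method names are illustrative rather than AnnoFlow's actual API.

```java
// Illustrative stand-in for the SanitizeHTML test case.
public class SanitizeHtmlSketch {
    // Encode a method's return value so it can be rendered safely as HTML.
    // '&' must be replaced first so the later entities are not re-escaped.
    static String encodeHtml(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }

    // Stand-in for a filtered method whose return value is sanitized.
    static String readFromDatabase() {
        return "<b>bold & \"quoted\"</b>";
    }

    public static void main(String[] args) {
        // One filter, no policies, no logging: just string replacement
        // on the return value, 50 times per request as in the test case.
        String encoded = "";
        for (int i = 0; i < 50; i++) {
            encoded = encodeHtml(readFromDatabase());
        }
        System.out.println(encoded);
        // -> &lt;b&gt;bold &amp; &quot;quoted&quot;&lt;/b&gt;
    }
}
```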

In the \emph{UserInput} test case we simulated a more complex set of
actions: user values were read in, written to a database, and finally
displayed back to the user, with auditing enabled throughout the
process. This test case is more indicative of an interactive web
application. The overhead was 32.19\%.
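The combined pipeline might look like the following sketch, with an in-memory list standing in for the database and the logging framework; all names here are hypothetical, not AnnoFlow's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the UserInput test case: read, store,
// and display a user value, auditing at each step.
public class UserInputSketch {
    static final List<String> auditLog = new ArrayList<>(); // stand-in logger
    static final List<String> database = new ArrayList<>(); // stand-in database

    // Auditing enabled throughout: record each stage the value passes through.
    static String audit(String stage, String value) {
        auditLog.add(stage + ": " + value);
        return value;
    }

    static String encodeHtml(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    // Read user input, write it to the database, then display it back,
    // with an audit at every boundary.
    static String handleRequest(String userInput) {
        String value = audit("input", userInput);
        database.add(audit("store", value));
        return audit("display", encodeHtml(value));
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("<i>hello</i>"));
        System.out.println("audit entries: " + auditLog.size());
    }
}
```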

In the \emph{XSS} test case we simulated a method that analyzes its
parameters for an attempted cross-site scripting attack. Before the
method executes, the filter analyzes all parameters and throws an
exception if any of them matches a disallowed pattern. Each request
generated 50 such attempts. The overhead was 6.73\%.
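A sketch of such a parameter check is below; the small regular-expression blacklist is purely illustrative (a production filter would be far more thorough), and the names are assumptions rather than AnnoFlow's actual API.

```java
import java.util.regex.Pattern;

// Illustrative stand-in for the XSS test case.
public class XssFilterSketch {
    static class XssAttemptException extends RuntimeException {
        XssAttemptException(String message) { super(message); }
    }

    // A toy blacklist of disallowed patterns, for illustration only.
    static final Pattern DISALLOWED =
        Pattern.compile("(?i)<\\s*script|javascript:|on\\w+\\s*=");

    static boolean isDisallowed(String param) {
        return DISALLOWED.matcher(param).find();
    }

    // The filter boundary: inspect every parameter before the method runs.
    static void checkParams(String... params) {
        for (String p : params) {
            if (isDisallowed(p)) {
                throw new XssAttemptException("disallowed pattern in parameter");
            }
        }
    }

    public static void main(String[] args) {
        // One request in the test case: 50 attack attempts, all blocked.
        int blocked = 0;
        for (int i = 0; i < 50; i++) {
            try {
                checkParams("name=alice", "<script>alert(" + i + ")</script>");
            } catch (XssAttemptException e) {
                blocked++;
            }
        }
        System.out.println("XSS attempts blocked: " + blocked);
    }
}
```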

\begin{figure}
\centering
\includegraphics{figure/PerfTable.png}
\caption{Overhead of running AnnoFlow}
\end{figure}