\chapter{Introduction}
\pagenumbering{arabic}
\section{Initial Motivation}
It is not uncommon for non-functional requirements to be ignored when discussing a potential solution to a problem, only for it to emerge that such a solution would be impossible, or infeasible, to implement in a real-world environment \citep{website:nonfunc}.
This is why non-functional requirements are important to the implementation of a software application. One concern that may arise over the course of the development life cycle of a software application is measuring its performance with respect to these non-functional requirements. Unlike functional requirements, which are relatively easy to verify, non-functional behaviour is difficult to monitor without an external tool.

Existing solutions to this problem include source code instrumentation and Java Virtual Machine (JVM) monitoring tools. The main problem with source code instrumentation libraries, such as log4j, as a method for monitoring non-functional behaviour is that the technique requires the developer to know exactly which non-functional properties are to be measured in advance. Any additional properties that require measurement after the submission of the application will require a change to the source code which, in many production environments, can be a tedious and lengthy process, as many company policies require paperwork for such modifications. The issue with using a JVM monitoring tool is that it can have a severe impact on the application's runtime, potentially affecting the results of the tool and, hence, making it of little use for the verification of non-functional requirements.

Aspect-oriented libraries, however, suffer from neither of these limitations. Unlike source code instrumentation, an aspect-oriented library can be configured in such a way as to not require constant changes to the compiled class files. This means the developer will not be required to fill out the paperwork needed to change what is being monitored. Also, unlike a JVM monitoring tool, the impact upon the application can be minimised: though the total runtime will be longer, the amount of time spent running the application's methods should not be greatly affected. Because of this, it was decided that the project would be implemented as an aspect-oriented library.
\newpage
\section{Aims \& Objectives}
The aim of this project is to create a stand-alone Java library that enables the user to monitor the non-functional behaviour of their Java application. It should:
\begin{itemize}
\item Interrupt methods prior to their execution.
\item Execute code that logs, or enables the logging of, the method's initial impact upon the monitored non-functional behaviour.
\item Run the interrupted method.
\item Interrupt the application subsequent to the method returning.
\item Execute code that logs, or enables the logging of, the method's eventual impact upon the monitored non-functional behaviour.
\end{itemize}
The system is intended to be usable by anyone, be they a corporate employee or an independent developer.
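The five interception steps listed above can be sketched in plain Java using a dynamic proxy. This is only a minimal illustration of the before/run/after pattern, not the aspect-oriented mechanism the library itself uses; the \texttt{Calculator} interface and the log format are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class InterceptionSketch {
    // Hypothetical interface standing in for a monitored application component.
    interface Calculator {
        int add(int a, int b);
    }

    static final List<String> LOG = new ArrayList<>();

    // Wraps a target so every call is interrupted before and after execution,
    // mirroring the five steps listed above.
    static Calculator monitored(Calculator target) {
        InvocationHandler handler = (proxy, method, args) -> {
            LOG.add("before " + method.getName());        // log initial impact
            long start = System.nanoTime();
            Object result = method.invoke(target, args);  // run the interrupted method
            long elapsed = System.nanoTime() - start;
            LOG.add("after " + method.getName() + " (" + elapsed + " ns)"); // eventual impact
            return result;
        };
        return (Calculator) Proxy.newProxyInstance(
                Calculator.class.getClassLoader(),
                new Class<?>[] {Calculator.class},
                handler);
    }

    public static void main(String[] args) {
        Calculator wrapped = monitored((a, b) -> a + b);
        System.out.println(wrapped.add(2, 3)); // 5
        System.out.println(LOG.get(0));        // before add
    }
}
```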

\section{Summary of Achievements}
Over the course of the project, the following has been achieved:
\begin{itemize}
\item The development of a library capable of measuring the non-functional characteristics of a Java application in real-time.
\item The development of a framework that can be extended to measure more non-functional characteristics.
\item The evaluation of the library for an application under normal run conditions, abnormal run conditions, and using JUnit tests.
\end{itemize}

\section{Dissertation Structure}
The remainder of the document is structured as follows:
\begin{itemize}
\item Chapter \ref{chap:prodef} - \nameref{chap:prodef} describes the problem in greater detail.
\item Chapter \ref{chap:back} - \nameref{chap:back} discusses the limitations of the available technologies for solving the problem, and describes and evaluates the potential methods for implementing a new solution to the problem.
\item Chapter \ref{chap:req} - \nameref{chap:req} defines the functional and non-functional requirements of the library.
\item Chapter \ref{chap:desimp} - \nameref{chap:desimp} goes into detail as to how the library is structured, how it is implemented, and how it is configured.
\item Chapter \ref{chap:eval} - \nameref{chap:eval} evaluates the library, using two case studies, and discusses the findings of the evaluations, as well as the issues with the evaluation.
\item Chapter \ref{chap:conc} - \nameref{chap:conc} summarises the points brought up through this document, and reflects upon the project, as well as mentioning areas for extension.
\end{itemize}

\chapter{Problem Definition}
\label{chap:prodef}
One of the potential issues encountered by a software developer trying to implement a software application is tracking down which part of the code is causing a bottleneck that slows the whole application down. Trawling through thousands of lines of code to find the offending method call can take a great deal of time and, in many cases, simply looking at the code may not be enough to identify the cause.

Another possible issue arises at the end of the project life cycle, when the non-functional requirements of the system must be verified. Some mechanism is needed to prove that the software application meets its non-functional requirements. It may be possible to do this simply by manually timing the code and keeping track of the various outputs, but this is not feasible for large, complex software applications, especially those that do not have an obvious output (such as applications that send data over a network).

Before looking into solutions to these problems, we must first ask whether they are a real issue at all. Why do non-functional requirements matter? Consider a system that fulfils all of its functional requirements: say, a calculator that can do everything from simple addition to solving high-order differential equations. This sounds very impressive, but if it takes 20 years to perform any of these functions then it is not a very useful calculator. Although this is an extreme example, it does not make the point any less valid.

\chapter{Background}
\label{chap:back}
\section{Agile}
Now that this has been identified as an actual issue, the next step is to consider when it needs to be solved. In the standard waterfall model, this would be done as part of the testing stage, after the code has been fully implemented. The issue with this is that it is very difficult to alter the whole software package if many issues are discovered. It is much better to find these problems sooner, and root them out as soon as possible.

This is where the Agile model comes in: testing is run continuously \citep{website:agile}. With continuous integration systems, it would be even easier to identify issues as they appear. A system that could track the non-functional requirements of the software could easily be added to the testing stage of a continuous integration environment and, although it may be more difficult to put a simple pass/fail rate on it, it would be easy to identify bottlenecks as they appear by checking through any logs generated.

As part of the Agile model, this theoretical solution to the problem must be something that can be used regularly, and be reasonably efficient. There are currently several such potential solutions available to deal with this problem.
\newpage
\section{Existing Solutions}
\subsection{Java Interpreter}
One potential solution to the problem is to use an interpreter to identify which lines of code are slowing everything down. The main issue with this method is its inefficiency: short of creating breakpoints almost everywhere in your code, sitting by it whilst it runs, resuming it at each breakpoint, and manually noting the time taken, there is no way to gather the performance of your code. A far better approach is to find a tool, or library, that will run alongside the application and report back to you.

\subsection{Java Virtual Machine Application Monitoring Tools}
Perhaps the most common solution to this problem is based around monitoring the JVM. This is useful if you are trying to measure the performance of the whole application over time, and allows you to identify the existence of a bottleneck within your code.

The issue with using this method is that it can have a large impact upon the application's performance, perhaps leading to results that are not comparable to those obtained under normal circumstances and, hence, not useful for the verification of non-functional requirements.

\subsection{Source Code Instrumentation}
Another option is to have the monitoring tool running at the application level. This can be done by creating a source code instrumentation library, or using an existing one. Such a library alters the bytecode of the class files being run, enabling you to see exactly where in the code the bottlenecks occur by adding timing code to the methods of the compiled classes.
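As a rough sketch of what such a rewrite produces, the plain Java below shows a method before and after timing code has been added around its body. The method, the log format, and the recording mechanism are all hypothetical; a real instrumentation library would perform this transformation on the compiled bytecode rather than on the source.

```java
import java.util.ArrayList;
import java.util.List;

public class InstrumentationSketch {
    static final List<String> TIMINGS = new ArrayList<>();

    // Original method, as the developer wrote it.
    static int sumTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    // Roughly what the method looks like after an instrumentation library has
    // rewritten the class: the original body is bracketed by timing code.
    static int sumToInstrumented(int n) {
        long start = System.nanoTime();
        try {
            int total = 0;
            for (int i = 1; i <= n; i++) total += i;
            return total;
        } finally {
            TIMINGS.add("sumTo took " + (System.nanoTime() - start) + " ns");
        }
    }

    public static void main(String[] args) {
        System.out.println(sumToInstrumented(100)); // 5050
        System.out.println(TIMINGS.size());         // 1
    }
}
```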

The issue with doing this is that certain organisations have strict policies on the changing of compiled code, requiring a large amount of paperwork and reviews before the changes can be made. This is a rather large process to go through for the sake of timing your code. However, there is an alternative that offers the functionality of source code instrumentation without having to change the compiled code.

\subsection{Aspect-Oriented Libraries}
Aspect-oriented libraries work by interrupting the code before and/or after the call or execution of methods. This technique avoids the main issue with source code instrumentation, while retaining all of its functionality. The main issue here is that, although this is a theoretical solution to the problem, there are currently no such libraries available that solve this particular problem. Because of this, it was decided that the project would be implemented as an aspect-oriented library.
\newpage
\section{Aspect-Oriented Programming}
\subsection{Overview}
The aspect-oriented programming paradigm is a method of programming that is meant to complement object-oriented programming. It allows increasing levels of complexity to be handled in a more manageable way \citep{book:saop}.
A large amount of object-oriented code contains ``boilerplate'' code, a term which refers to code that is very similar across many parts of a system. Such repeated code is an example of a ``cross-cutting concern''. Aspect-oriented programming can be used to separate these cross-cutting concerns from the main program logic, by implementing them as aspects \citep{notes:psd3}.

\subsection{Terminology}
There are five terms that are important to aspect-oriented engineering:
\begin{itemize}
\item Aspects.
\item Join points.
\item Advice.
\item Pointcuts.
\item Weaving.
\end{itemize}

Aspects are the aspect-oriented equivalent of classes. They may contain methods and properties, just as a class does, and they may interact with classes without those classes needing to explicitly refer to them. By default, there is only one instance of each aspect.

A join point refers to the place within the execution of the original code at which an aspect interrupts it. Like a stack frame, it holds information about the interrupted call, so that the code knows where it needs to return to after the aspect is finished.

Advice defines the code through which the aspect interacts with a class; it may interrupt a method before and/or after its execution. Advice is combined with a pointcut, which determines when the code should be interrupted.

A pointcut is what tells the weaver where in the code the aspect, usually through its advice, needs to be applied. Each point at which the pointcut matches the running code is a join point.

The method through which the aspects are linked with the original code is known as weaving. Weaving is where an agent tells the JVM when to run the aspects, given the current position in the code. There are three methods for weaving aspects into classes: compile-time weaving, run-time weaving, and load-time weaving.
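The terms above can be illustrated with a small timing aspect, written in the @AspectJ annotation style adopted later in this project. This is only a hedged sketch: the \texttt{com.example} package is hypothetical, and the code requires the AspectJ runtime on the classpath and a weaver to take effect.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

// The aspect: the aspect-oriented analogue of a class.
@Aspect
public class TimingAspect {

    // The pointcut: selects the join points to intercept (here, the
    // execution of any public method in a hypothetical com.example package).
    @Pointcut("execution(public * com.example..*.*(..))")
    public void monitoredMethods() {}

    // The advice: code run at each matched join point. @Around advice
    // interrupts the method both before and after its execution.
    @Around("monitoredMethods()")
    public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed(); // run the interrupted method
        } finally {
            System.err.println(joinPoint.getSignature()
                    + " took " + (System.nanoTime() - start) + " ns");
        }
    }
}
```

Weaving, discussed next, is the step that links such an aspect to the application's classes.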
\newpage
\subsection{Weaving}

\begin{figure}[H]
	\caption{A diagram showing how compile-time weaving works}
	\label{fig:ctw}
	\centering
		\includegraphics[width=\textwidth]{images/ctw.png}
\end{figure}
Compile-time weaving, as the name suggests, involves weaving the class files and the aspects together at compile time. The main issue with this is that, if you wish to make any changes to the aspects, you must recompile everything. As such, this method shares the issue of source code instrumentation, in that it changes the compiled code.
\begin{figure}[H]
	\caption{A diagram showing how load-time/run-time weaving works}
	\label{fig:ltw}
	\centering
		\includegraphics[width=\textwidth]{images/lrtw.png}
\end{figure}
Run-time weaving checks the execution calls of each method against the list of pointcuts in real time. This solves all of the previously mentioned problems, but causes another: as every call of every method is checked against the list of pointcuts, the code executes much more slowly, meaning that it may be difficult to tell whether any detected bottlenecks were caused by the target code, or are a side-effect of the run-time weaver.

Load-time weaving is a compromise between the two. Aspects and classes are compiled separately; then, at run time, when a class is loaded, the agent weaves the aspects into the class as appropriate. This means that any changes made to the aspects require only that the aspects are recompiled. As such, it was decided that load-time weaving would be used to instrument the library into the user's application.
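With AspectJ, load-time weaving is typically enabled by running the JVM with the weaving agent (\texttt{-javaagent:aspectjweaver.jar}) and supplying a \texttt{META-INF/aop.xml} file naming the aspects to weave. The following is a minimal sketch of such a configuration, assuming a hypothetical aspect and package name:

```xml
<!-- META-INF/aop.xml: read by the AspectJ load-time weaving agent -->
<aspectj>
    <aspects>
        <!-- hypothetical aspect to weave in at class-load time -->
        <aspect name="com.example.TimingAspect"/>
    </aspects>
    <weaver options="-verbose">
        <!-- restrict weaving to the application's own packages -->
        <include within="com.example..*"/>
    </weaver>
</aspectj>
```

Because the woven classes are produced in memory as they are loaded, the class files on disk remain unmodified.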

\subsection{Support for Aspect-Oriented Programming}
\label{sub:sfaop}
There are multiple extensions available to allow Java support for aspect-oriented programming. This section will provide a brief overview of some of the more commonly used extensions, and the pros and cons to each of them.

To begin with, there is the widely used Spring AOP. Perhaps the most commonly used aspect-oriented programming extension, Spring AOP is a simplified version of AspectJ. It can only put advice over the execution of operations on Spring beans, but it does not require a special compiler or weaver \citep{website:spring}.
Unfortunately, it does not appear to support load-time weaving, making this a non-option.

AspectJ, on the other hand, the project on which Spring AOP is based, does have the capacity for load-time weaving, and provides full aspect-oriented programming functionality without the need to include Spring at all. This made it a strong candidate.

Finally, there was Javassist which, although technically a bytecode manipulation library, offers the functionality of a fully fledged aspect-oriented programming extension. The main issue with it is its limited support for generalising, making it difficult to configure over a whole package. This specificity is, in itself, a positive for the configurability of individual classes, but I felt that it would complicate the code structure too greatly.

After considering all three options, AspectJ seemed the obvious choice. A decision then had to be made as to which AspectJ format should be used. The options were the native AspectJ style, in which code is written with a structure very similar to most Java code, or the @AspectJ annotation style, which uses annotations in a manner similar to JUnit. There were no real pros or cons to either, other than the readability of the code, so the @AspectJ annotation style was used, owing to a preference for that style.
