This chapter focuses on the methods we employed to validate and verify that our system, iSquirrel, meets the technical requirements introduced in Section~\ref{sec:p_m} that guided its design and development, works as expected, and that any bugs missed during the development testing phase are identified and reported efficiently.

Testing plays a crucial part in any software development project. The abundance of development methods, software tools and testing suites should have refined testing into an exact science. Unfortunately, this is not the case: even today, it seems that less is known about software testing than about any other aspect of software development. In fact, testing remains among the "dark arts" of software development \cite{testing_1}.
Given the above, we carefully selected some of the best available methods for testing iSquirrel, in order to deliver an outstanding product and service.

\section{Unit Testing}

\subsection{What is Unit Testing}
A unit typically refers to the smallest testable part of an application; unit testing exercises such a unit in isolation, examining the interaction of the methods and functions confined within it. Supporting test code, usually referred to as \emph{scaffolding}, may be written to drive a test and determine its correctness. This type of testing is driven by the development and implementation teams to ensure that the code meets its design and behaves as intended. Unit testing frameworks make this easier by letting the programmer write tests without having to worry about the underlying implementation mechanisms. As Java has been the main programming language of our system, we naturally opted for JUnit\footnote{More information about JUnit at \url{http://www.junit.org}}.

\subsection{JUnit and iSquirrel}

Due to the nature of our application, our code base is grouped in two parts: the Core API and the Web-based User Interface. The Core API consists of all the application logic, including database access, retrieval of online data, the semantic recommenders, user management, etc. The Web-based User Interface comprises the rest of the code: Java Servlets, JSP files, JavaScript, CSS and media resources such as images. In order to test our code efficiently, we tried to keep as much code as possible inside the Core API, which we could test thoroughly with JUnit tests.
	
Unit testing was an integral part of all the iterations. Each class that defined new functionality in the Core API (except some Java Beans that did not have any methods) was accompanied by a JUnit test. In the first couple of weeks the unit tests were primarily used to test database access, which helped us understand how Hibernate worked.

During the following weeks, all new features that were developed were tested, both their functionality and their database storage. These include online-data retrieval from DBpedia, Facebook integration, "liking" pages, searching for friends, etc. During the last couple of weeks our tests focused on the recommenders we developed. Although they were in unit test form, they acted more like integration tests: they involved user creation, adding interests, adding "liked" pages, adding friends, etc., all the factors that could affect the recommendations.

A sample of bugs that were discovered as a result of these unit tests are as follows:
\begin{itemize}
\item \textbf{"What other people like" bug:} For each of the URLs that users \emph{"liked"}, a floating point number (popularity/rating) was assigned. To show the most popular ones first, the URLs had to be sorted by their rating. The bug we encountered was that, after sorting, some URLs were missing from the list. To sort the URLs we had used an ordered set (\emph{java.util.TreeSet}), which discards elements with the same popularity, as a set can only hold distinct values. The bug was eventually identified and fixed by sorting with the \emph{java.util.Collections} class instead.
\item \textbf{Profile Comparison bug:} Profile comparison compares two users' profile entries and measures how closely related their profiles are. Each time the algorithm ran, it generated different results even though the input had remained unchanged. For optimisation reasons we had attempted to move some user profile entries into a separate list, but these entries were never bound back to the database and were eventually lost, bringing the system into an inconsistent state and showing the user the wrong result.
\item \textbf{Hibernate Session Issue:} During unit testing we dealt with a number of Hibernate issues that we were not previously aware of. As an example, when a session commits (closes), all live objects cease to be bound to the database. Consequently, simple member variables (not Collections) of an object were still accessible with their correct value, but object collections (e.g. the List of Friends of a User) were empty. This bug was fixed by rebinding the objects at the opening of a new session.
\end{itemize}
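The sorting bug above is easy to reproduce in isolation. The \emph{RatedUrl} class and the sample values below are hypothetical stand-ins for our actual classes; the point is that a \emph{TreeSet} ordered by rating silently drops entries whose ratings tie, whereas sorting the list with \emph{Collections.sort} keeps every entry.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class SortingBugDemo {
    // Hypothetical stand-in for a "liked" URL with a popularity rating.
    static class RatedUrl {
        final String url;
        final double rating;
        RatedUrl(String url, double rating) { this.url = url; this.rating = rating; }
    }

    static final Comparator<RatedUrl> BY_RATING_DESC = new Comparator<RatedUrl>() {
        public int compare(RatedUrl x, RatedUrl y) {
            return Double.compare(y.rating, x.rating); // highest rating first
        }
    };

    public static void main(String[] args) {
        List<RatedUrl> liked = new ArrayList<RatedUrl>();
        liked.add(new RatedUrl("http://example.org/a", 3.0));
        liked.add(new RatedUrl("http://example.org/b", 3.0)); // same rating as /a
        liked.add(new RatedUrl("http://example.org/c", 5.0));

        // Buggy approach: the TreeSet treats equal ratings as duplicate
        // elements, so one of the two 3.0-rated URLs is silently dropped.
        TreeSet<RatedUrl> byRating = new TreeSet<RatedUrl>(BY_RATING_DESC);
        byRating.addAll(liked);
        System.out.println("TreeSet size: " + byRating.size());   // 2, not 3

        // Fix: Collections.sort keeps every element, including ties.
        Collections.sort(liked, BY_RATING_DESC);
        System.out.println("Sorted list size: " + liked.size());  // 3
    }
}
```

A comparator that breaks ties (e.g. by URL) would also make the set approach safe, but sorting the list directly is the simpler fix.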


\section{System Validation}
\label{sysval}
System validation is an essential part of the Software Development Cycle. Validation ensures the system is stable and consistent. 

There are two kinds of System Validation:
\begin{enumerate}
  \item User Interface Validation
  \item Core Validation
\end{enumerate}

The two are covered in more detail below.
 
\subsection{User Interface Validation}

To perform a successful system validation, a developer has to make sure that the testing performed covers all the functionality of the system and fully exercises the User Interface (UI). Testing our UI proved to be a difficult task.

The difficulty is twofold: the testing has to deal with both the size of the input domain and the sequencing of actions. The domain size is obviously far larger than that of a Command Line Interface program: if the software has a Graphical User Interface (GUI), the number of operations that a user can perform increases dramatically.

The second problem, action sequencing, is even harder to test in terms of the number of test cases that need to be written. Some functionality of the system might be achievable only if actions are executed in a certain sequence, or even in a number of different sequences. Obviously, increasing the number of operations results in an increase in complexity, often an exponential one; hence the sequencing problem.

Regression testing can also be a problem with UIs. Since the UI is often the part of a system most prone to change, any test that was designed to follow a certain path through a previous version of the UI has a really short life span. A large number of the tests can only apply to the version of the UI that they were designed for.

\subsection{Validating the User Interface}
We decided to split our UI testing into three main testing areas:
\begin{itemize}
\item Input \& Output testing
\item Failure testing
\item Usability testing
\end{itemize}

The first two areas are covered in more detail below. Usability testing is covered in Chapter~\ref{ch5} due to its direct relation to evaluation.

\subsubsection{Input \& Output testing}
This area of testing is concerned with the state of the system after the user has performed actions on the UI. For example, we need to validate if the system will display the correct page or redirect to the correct servlet and thereafter page, after the user has clicked on a link or has made a search.

Testing was done partly with the Selenium Web App Testing System, as described in Section~\ref{subsec:automating_validation}, which allowed us to simulate user clicks and assert the output, and partly by manually navigating through the site and performing various searches and actions.

\subsubsection{Failure Testing}
Failure testing is a way to ensure that the system will not fail when dealing with strenuous or extreme conditions. It is also a way to test how gracefully the system recovers, if and after it fails. In terms of the UI, and more specifically our web application, there are a number of ways the system can fail.
To validate the UI, the following cases had to be taken into account as possible failures of our system, with each one tested accordingly:

\begin{enumerate}
  \item Given that the application is hosted on the internet, the most common failure is an HTTP request going wrong. Since our system makes heavy use of AJAX calls, a large part of the UI depends on HTTP requests to the server succeeding in order for elements to be displayed correctly on screen.
\\
\textbf{Testing:} One of the problems of automated testing of asynchronous calls is that one cannot predict when the response will come back from the server. To test our calls, we used a facility of the Selenium Testing System which allows the test to wait for the response before proceeding with the rest of the test. This also proved very useful for representing situations where the server failed, and for testing the server's load-handling ability in general: probabilistically, after numerous HTTP requests to the server, one is bound to fail. Since we did not have to make the AJAX calls manually, it was very easy to set the system working and then watch how it handled the requests. We tested how the UI behaved in case of error, and how it made the user aware of the error.
  \item A UI can fail when it comes to cross-browser compatibility. While the W3C has set a standard for how elements and styling should be interpreted by a browser, not all browsers abide by it.
\\
\textbf{Testing:} The first step in testing and validating a web application's UI code (JavaScript, CSS \& HTML) is to use tools which check the quality and correctness of the code. There are numerous online tools for this task. We decided to use JSLint\footnote{More information on \url{http://www.jslint.com}} to validate our JavaScript code and make sure that it would run on all browsers. JSLint offers a variety of options and can even assume a specific browser for the testing. To validate CSS and HTML code we used the W3C\footnote{\url{http://www.w3.org}} validation services. We then proceeded to run the application in different browsers, operating systems and screen resolutions to see how it would function.
  \item Extreme cases of user input or information have to be catered for accordingly, to make sure they do not break the desired interface design and layout.
\\
\textbf{Testing:} We used manual user input to test how the system coped under extreme inputs. Apart from user input, the nature of our application required us to be extra careful about how we dealt with information coming from third parties, such as DBpedia. Information came in various sizes and formats. To make sure that the UI layout remained consistent, we decided to set thresholds on the amount of data we would display and hide the rest. To test this feature, automation was our friend once again: a number of AJAX test calls were issued that downloaded content while we observed how the UI coped.
\end{enumerate}
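The wait-for mechanism we relied on in the AJAX tests above can be sketched as a plain polling loop: check a condition repeatedly until it holds or a timeout expires, and treat the timeout as the failure case the UI must then report. This is a minimal sketch with hypothetical names, not Selenium's actual implementation.

```java
public class WaitFor {
    // A boolean condition to be polled, e.g. "the AJAX response has arrived".
    interface Condition { boolean holds(); }

    // Polls the condition until it holds or timeoutMillis elapses.
    // Returns false on timeout, which the caller treats as a failure.
    static boolean waitFor(Condition c, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (c.holds()) return true;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // timed out: treat as a failed AJAX response
    }

    public static void main(String[] args) {
        final long start = System.currentTimeMillis();
        // Simulated AJAX response that becomes ready after ~100 ms.
        Condition responseReady = new Condition() {
            public boolean holds() { return System.currentTimeMillis() - start > 100; }
        };
        System.out.println("ready: " + waitFor(responseReady, 2000, 20));

        // A response that never arrives should time out rather than hang.
        Condition neverReady = new Condition() {
            public boolean holds() { return false; }
        };
        System.out.println("timed out: " + !waitFor(neverReady, 200, 20));
    }
}
```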


\subsection{Automating Validation}
\label{subsec:automating_validation}
Validation can be done manually or by using tools that offer suitable APIs to automate the process. We opted for automated validation and used the Selenium Web Application Testing System\footnote{\url{http://www.seleniumhq.org}} to validate our UI.

Selenium is an open-source project that offers an Integrated Development Environment (IDE) with recording capabilities, as well as a client/server system known as the Selenium Remote Control, which allows testers to control web browsers locally or on other computers, using almost any programming language and testing framework.
The IDE is offered in the form of a Firefox add-on. Using the add-on we recorded simple tests that could be performed without using the Remote Control (RC) API. 

Where iteration or conditional testing was needed, the IDE was not enough. Hence, we turned to the Selenium Remote Control. 

\begin{figure}[H]
\begin{center}
\includegraphics[width=4in]{resources/pic.png}
\caption{Searching for a friend UI test\label{ui_test}}
\end{center}
\end{figure}

\newpage

Listing~\ref{lst:selenium} shows a code snippet of how a UI test is performed using the RC Java API.
\lstset{language=Java}
\lstset{backgroundcolor=\color{white}}
\lstset{tabsize=2} 
\lstset{keywordstyle=\color{red}\bfseries}
\begin{lstlisting}[frame=tb, caption=User Interface test using the RC API, label=lst:selenium]
import com.thoughtworks.selenium.*;

public class Snippet extends SeleneseTestCase {
    public void setUp() throws Exception {
      setUp("http://localhost:8080/", "*chrome");
    }
    public void testSnippet() throws Exception {
      selenium.click("link=Your network");
      selenium.click("1");
      selenium.type("query", "kkk");
      selenium.click("//span[@onclick='follow(this,53136)']");
    }   
}

\end{lstlisting}



\subsection{Core Validation}

\subsubsection{Grey box testing}
Grey box testing is an effective testing strategy, since having the required knowledge of the internal mechanisms of the system allows one to test it productively. In our case, knowing how recommendations are generated was the key to using this method effectively.

\textbf{Testing: }To begin with, we created a set of new users. We then assigned interests to them, and \emph{"liked"} pages on behalf of each user. We also tried to assign \emph{"liked"} pages that had something in common. As an example, in one of the tests a user liked articles about Beethoven and Mozart, and the system recommended articles about other German composers, as expected. We enriched the \emph{"liked"} pages of each user with a variety of subjects to make sure that the dynamic recommendations were targeted and not off-topic.

A similar approach was followed for the static recommender and the profile comparator. For the former, we made sure that recommendations were provided even if a user was either too specific or too general about what they liked. A list of interests covering all kinds of subjects was compiled, and we observed the recommendations given.

For the latter, we used a number of users who \emph{"liked"} different pages. Some of the pages had things in common, such as their subject, location or genre (if the article was related to music), but there were also pages completely irrelevant to the others.
Using this input, we ran the \emph{profile comparator} and made sure that users with the most relevant liked pages had a higher matching "rating" than those who liked pages about different subjects.
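As an illustration of the behaviour these grey-box tests checked, the following sketch rates two users by the overlap of their \emph{"liked"} pages (a Jaccard index). The class name, method and sample pages are hypothetical; our real comparator takes more factors into account than page overlap alone.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProfileComparatorSketch {
    // Hypothetical similarity measure: the fraction of "liked" pages two
    // users share, i.e. |A intersect B| / |A union B| (Jaccard index).
    static double similarity(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> inter = new HashSet<String>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<String>(a);
        union.addAll(b);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> alice = new HashSet<String>(Arrays.asList(
                "Beethoven", "Mozart", "Vienna"));
        Set<String> bob = new HashSet<String>(Arrays.asList(
                "Beethoven", "Mozart", "Salzburg"));
        Set<String> carol = new HashSet<String>(Arrays.asList(
                "Football", "Cooking"));

        // The test property: users with overlapping liked pages must rate
        // higher than users with completely unrelated pages.
        System.out.println(similarity(alice, bob) > similarity(alice, carol));
    }
}
```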

\subsubsection{Manual Code Inspection}
In software development, a small coding error can result in incorrect behaviour and deviation from the original specifications given to the developers.

Manual code inspection is often used to identify and determine any flaws left in some part of the project. Reviewing code often verifies compliance with industry security standards, and the team therefore thought it important to inspect code manually as often as possible. Mainly, it allowed us to point out to the member who wrote the code any potential errors and bugs that they might have failed to notice at the time of writing.

We decided that at each task allocation, one member would develop and commit their task to the SVN repository after satisfactory testing (with the use of JUnit), while another would manually inspect the code. Members were often paired based on the knowledge they had of the task at hand. This enabled the team to be more efficient and productive, as both members were familiar with what they had to deal with.

Most of the time we were able to identify coding errors and resolve the issue quickly. Sometimes it took longer, as the test suites had to be more exacting and stricter.

Apart from errors and bugs, we were particularly keen to examine the code's extensibility and any duplication of business logic.

Figure~\ref{fig:codereview} illustrates the code review process we employed.


\begin{figure}[H]
	 \begin{minipage}{\textwidth}
	 \begin{center}
 		\includegraphics[width=2in,height=3.6in]{resources/CodeReview.png}
 		\caption[]{Microsoft's Manual Code Inspection Process \footnote{Taken from \url{http://www.msdn.microsoft.com/en-us/magazine/cc163312.aspx}} \label{fig:codereview} }  
	\end{center}
	\end{minipage}
\end{figure}

\subsubsection{Stress Testing}
Stress testing is a form of testing that is used to determine the stability of a given system, and involves testing beyond normal operational capacity. A sample of bugs that were discovered as a result of stress testing is as follows:
\begin{itemize}
  \item \textbf{Slow response time: } One of the biggest challenges our software faced under stress testing was that DBpedia queries proved to be slow. The response time was reasonable when the system issued up to ten DBpedia queries per unit time, but with more queries the response time became very slow. To deal with this issue, we modified our solution to use a multi-threaded approach. For example, when we wanted to get static recommendations for an interest, we divided the work among different threads, each collecting URLs from DBpedia on the same interest but about a different subject/aspect of it. This method brought a dramatic increase in the speed of the recommender system. The idea is that shorter queries no longer have to wait for the longer ones before being executed, as they now run in parallel.
  \item \textbf{Long queries: } Another issue with DBpedia querying was that queries could get too long. If their size exceeded a maximum, DBpedia returned an error. Therefore, we had to be careful about what we queried with, and if a query got too big we had to split it into several smaller ones.
  \item \textbf{"Out of Memory" error: } Stress testing also revealed that Java kept throwing an "Out Of Memory" error when a user had too many recommendations (approximately more than 55,000). As these recommendations have to appear almost instantly when the user clicks on the relevant tab, they have to be generated and collected ahead of time; this was done when the user added a new interest to their list or "liked" a new article, without the user needing to be aware of the process.
 The idea is to collect as many recommendations as possible and then randomly distribute some to the user. When there were too many recommendations, we could not load them all in memory. To overcome this problem, we increased Tomcat's memory limit and kept the breadth of our recommendations intact.

\end{itemize}
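The multi-threaded approach described for the slow-response problem can be sketched with a standard thread pool: each aspect of an interest is queried in its own task, and the results are merged as the tasks complete. The \emph{queryAspect} stub and the aspect names below are hypothetical placeholders for our actual requests to DBpedia.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelQueries {
    // Hypothetical stand-in for one DBpedia query about a single aspect
    // of an interest; the real code would issue a network request here.
    static List<String> queryAspect(String interest, String aspect) {
        List<String> urls = new ArrayList<String>();
        urls.add("http://dbpedia.org/resource/" + interest + "_" + aspect);
        return urls;
    }

    public static void main(String[] args) throws Exception {
        final String interest = "Beethoven";
        String[] aspects = { "Works", "Influences", "Era" };

        // One task per aspect: short queries no longer wait behind long ones.
        ExecutorService pool = Executors.newFixedThreadPool(aspects.length);
        List<Future<List<String>>> futures = new ArrayList<Future<List<String>>>();
        for (final String aspect : aspects) {
            futures.add(pool.submit(new Callable<List<String>>() {
                public List<String> call() {
                    return queryAspect(interest, aspect);
                }
            }));
        }

        // Merge the partial results as the tasks complete.
        List<String> recommendations = new ArrayList<String>();
        for (Future<List<String>> f : futures) {
            recommendations.addAll(f.get());
        }
        pool.shutdown();
        System.out.println(recommendations.size() + " URLs collected");
    }
}
```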
