%\section{Introduction}  --not needed (removed from all chapters)
 
The following chapter describes the steps we undertook to evaluate our system and the techniques we employed while doing so. We present a thorough analysis of a survey we produced and explain how it facilitated the evaluation process. Furthermore, we take a look at other recommender systems and compare them with our own. Lastly, we provide an overview of how we performed as a team and the lessons we learnt from the way in which this project was coordinated.

Before we proceed with evaluating our system, we need to understand why evaluation plays a crucial role in the software development cycle. Questions such as ``What do we need to evaluate?'', ``In what context?'' and ``When do we evaluate?'' arise instantly at the time of evaluation.

\emph{``The basic idea is simple. To get something done, you have to start with some notion of what is wanted -- the goal that is to be achieved. Then, you have to do something to the world, that is, take action to move yourself or manipulate someone or something. Finally, you check to see that your goal was made. So there are four different things to consider: the goal, what is done to the world, the world itself, and the check of the world. The action itself has two major aspects: doing something and checking. Call these execution and evaluation.'' [Donald Norman, 1988, p.46]}

\section{Evaluating our semantic commitment}

\subsection{Compliance with Semantic Web Recommender Systems}

Having designed a semantic web recommender system, we take a look at a few research issues \cite{swrs} that such a system has to address, and compare our solution against them.

\begin{itemize}

\item \textbf{Ontological Commitment}

We have made use of a single resource: DBpedia. As the core of our application lies in the recommendation of Wikipedia articles, DBpedia was the obvious choice. DBpedia organises Wikipedia's large dataset into a semantic form and hence gives us the flexibility to machine-process the users' information and produce recommendations.
Currently, our system depends only on DBpedia. This potentially restricts the range and accuracy of our recommendations. Much work could be done to shape these recommendations according to the users' interests. One extension would be to integrate the system with WordNet and justify the users' recommendations using \emph{synsets}, which are, essentially, groups of words on the same subject, linked together semantically.

\item \textbf{Interaction Facilities}

Our centralised solution should be suitable for the majority of users, for whom the use of asynchronous requests to the server should not matter. Indeed, our system relies on asynchronous communication to exchange information. We endeavoured to keep communication with the user to a minimum and to alert them should problems of any kind arise. Given more time, extra care in the design could have eliminated further sources of error.

\item \textbf{Security and Credibility}

We have made a potentially unsafe assumption: in any situation where our system comes under an advanced malicious attack, it is undoubtedly vulnerable. No precautions were taken to secure the service beyond standard measures such as hashing the users' passwords with the MD5 algorithm (itself no longer considered strong for password storage) and guarding against SQL injection attacks (handled by the Hibernate Framework).

Recommenders built on social networks also need to consider privacy issues for their users and allow them to control their own privacy.

\item \textbf{Computational Complexity and Scalability}

We tried several solutions to speed up the process of generating recommendations; however, there is always room for improvement. Whilst our system can be easily extended, having followed a modular approach, it will not deal effectively with a vast user base.

\item \textbf{Low Profile Overlap}

Having implemented a profile matching algorithm that generates satisfactory results, thoroughly described in Chapter~\ref{ch3}, we have not devised new ways to ensure that profile overlap is meaningful. In our system, two profiles overlap when users have common \emph{properties} as a result of \emph{``liking''} similar pages.

\end{itemize}
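The overlap criterion above can be sketched as a set comparison over the properties extracted from liked pages. The sketch below is illustrative only: the function name, the Jaccard measure and the example category sets are our own, not the implementation described in Chapter~\ref{ch3}.

```python
def profile_overlap(props_a: set, props_b: set) -> float:
    """Jaccard similarity between two users' property sets.

    A property is a semantic attribute (e.g. a DBpedia category)
    extracted from the pages a user has "liked". The names here
    are illustrative, not those used in our system.
    """
    if not props_a or not props_b:
        return 0.0
    return len(props_a & props_b) / len(props_a | props_b)

# Two users who liked overlapping pages share some properties.
alice = {"Category:Jazz_musicians", "Category:Trumpeters", "Category:Bebop"}
bob = {"Category:Jazz_musicians", "Category:Bebop", "Category:Pianists"}
```

A pair of users who liked overlapping pages thus receive a non-zero score, while disjoint profiles score zero.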

\subsection{Use of semantic information}

The purpose of our project is to make heavy use of semantic information that is freely available on the web. This part of the report answers questions such as ``How much did we use?'' and ``Could we have done better?'', and discusses other resources that we could put into service in the future.
As stated in the specification of this project, our sole resource for semantically annotated information on the web is DBpedia.

DBpedia, as previously mentioned in Section~\ref{sec:semanticweb}, describes more than 2.9 million ``things''. This is more than any user could probably ask for, but it enforces a strict limitation on the dataset that we are able to operate on: Wikipedia articles.
As an extension of the project, a user is also recommended music and videos based on their expressed interests and profile keywords. These recommendations cannot be justified semantically, since neither YouTube nor Last.fm provides any information about their videos and music in a semantic form.
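To illustrate the kind of lookup involved when recommending from DBpedia (the exact queries our system issues may differ), the subjects of a DBpedia resource can be fetched with a SPARQL query against the public endpoint:

```python
from urllib.parse import urlencode

DBPEDIA_ENDPOINT = "http://dbpedia.org/sparql"  # public SPARQL endpoint

def subjects_query(resource: str) -> str:
    """Build a SPARQL query fetching the dcterms:subject categories
    of a DBpedia resource, e.g. 'Jazz' -> dbpedia:Jazz."""
    return (
        "PREFIX dcterms: <http://purl.org/dc/terms/>\n"
        "SELECT ?subject WHERE { "
        f"<http://dbpedia.org/resource/{resource}> dcterms:subject ?subject . "
        "}"
    )

def endpoint_url(resource: str) -> str:
    """URL that would return the query results in JSON."""
    params = {"query": subjects_query(resource), "format": "application/json"}
    return DBPEDIA_ENDPOINT + "?" + urlencode(params)
```

Fetching the resulting URL returns, for each resource, the category URIs from which candidate recommendations can be drawn.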

Although the term \emph{semantic web} still remains unknown to many, there exist a few resources that could prove useful to our system. WordNet\footnote{WordNet website: \url{http://wordnet.princeton.edu}}, for example, is a large lexical database of English, whose words are grouped together and interlinked semantically with other groups. In future releases of iSquirrel, WordNet could come in handy by providing us with synonyms when users input their interests. Such a step would allow us to increase the number of recommendations substantially.
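Such a synonym-expansion step might look as follows. The tiny synonym table is a stand-in for a real WordNet lookup and is purely illustrative:

```python
# Placeholder for a WordNet synset lookup; a real implementation
# would query WordNet itself rather than this illustrative table.
SYNONYMS = {
    "film": {"movie", "picture"},
    "car": {"automobile", "auto"},
}

def expand_interests(interests: list) -> set:
    """Expand each interest with its synonyms so that more
    DBpedia resources can be matched against the profile."""
    expanded = set(interests)
    for term in interests:
        expanded |= SYNONYMS.get(term, set())
    return expanded
```

A user who enters ``film'' would then also receive recommendations matched against ``movie'' and ``picture''.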

SIMILE\footnote{SIMILE website: \url{http://simile.mit.edu/}}, of MIT, focuses on developing open source tools that empower users to access, manage, visualise and reuse digital assets. Tools such as Piggy Bank, Semantic Bank and Solvent together bring forth a powerful method of embedding web pages with information that machines can understand. Piggy Bank stores this information in the RDF format, whose main advantage is the ease of machine processing.
With such knowledge at hand, we could implement a book recommender using Amazon's or Google's vast book database. Piggy Bank would be able to process the books' attributes semantically and allow us to match the user's book preferences against these attributes.


 \section{Evaluating our System}
 
When developing a system that will be accessible by all types of users, it is immensely important that it meets the end-users' needs to a high level of satisfaction. As a consequence, the system must be evaluated correctly.
We carefully reviewed the available evaluation techniques and identified those that would give us sufficient feedback.
 
\subsection{User Evaluation}

\subsubsection{Usability Testing}
Usability testing is irreplaceable when testing a GUI because it provides direct and fast feedback on how the user will interact with the system. Even if the system functions perfectly on all levels, it is useless if the average user cannot use it effectively and efficiently. This applies all the more to our system: being a web application, it can be accessed by anyone with a computer and an internet connection. The range and diversity of users therefore increases drastically, and designing for usability becomes a harder task.

Testing and validating the system for usability really comes down to giving a prototype of the system to a sample of users to play with. We decided on a set of predetermined tasks that the users would execute when testing. Noting the performance of each user at each task would allow us to gain some insight on how our system performs.

The tests were designed with Steve Krug's first law of usability as a rule of thumb: ``Don't make me think!''. We wanted the system to be easy to use, with everything no more than two clicks away. The tests revealed a couple of problems with the UI. A notable one was that users could not easily see that searching could be done in more ways than just by a user's name.
After analysing the results we redesigned a part of the interface to eliminate the surfaced problems.

Unfortunately, at the time of testing, the bookmarklet was not completely finished, so it was excluded from the tests. This led to problems when the system was released to the public a few weeks later. It seems that bookmarklets are not well known or understood amongst the non-computer-literate, so most of our users fumbled when they first encountered ours. This is reflected in the user comments in Appendix~\ref{usercomments}.

\subsubsection{Collecting users' opinions}
We produced a questionnaire which we have published online as a survey\footnote{Survey available on: \url{http://www.surveymonkey.com/s/3HPJDY5}} to determine whether our web application matches the users' expectations. We also asked users to offer honest constructive criticism as the results were to be taken into consideration.
The survey tries to determine the answers to questions such as:
  \begin{itemize}
  \item the overall look and feel of the website;
  \item whether the recommendations were relevant to the user's interests;
  \item whether navigating through the website was intuitive;
  \item usage of the bookmarklet;
  \item the website's performance;
  \item whether the integration of Facebook on iSquirrel was understood.
  \end{itemize}
 
We chose to include in the report a subset of the questions that support some of the claims made in previous sections. These include the relevance and adaptation of the recommendations, the speed of the system and the UI design. Figures~\ref{fig:q1}, \ref{fig:q2}, \ref{fig:q3} and \ref{fig:q4} present the results we obtained in the form of histograms. Each figure outlines the question and a brief analysis of the results.

\begin{figure}
	\begin{center}
	\includegraphics[scale=0.75]{resources/q1.pdf}
	\caption{\textbf{What were your first impressions of the website?} The top three responses were \textit{Friendly (62.9\%)}, \textit{Easy to use (60.0\%)} and \textit{Simplistic (57.1\%)} which support our claim for designing a simple and friendly UI.}
	\label{fig:q1}
	\end{center}
\end{figure}

\begin{figure} 
	\begin{center}
	\includegraphics[scale=0.75]{resources/q2.pdf}
	\caption{\textbf{How relevant were your recommendations?} The recommendations proved to be satisfying for most of the users. This highly depended on how specific one's interests were. Generally we are happy with the results, but given more time they could have been even better.}
	\label{fig:q2}
	\end{center}
\end{figure}

\begin{figure}
	\begin{center}
	\includegraphics[scale=0.75]{resources/q3.pdf}
	\caption{\textbf{The recommendations you get improve depending on which pages are marked as Liked. How much improvement did you notice while using the service?} We were actually not expecting our users to see any improvement in the recommendations after liking pages, given that most of them used the system for a very short period of time. Any adapting system needs a certain training period to tailor itself to the user's profile. Fortunately, most of the users (68.6\%) have experienced some improvement which shows that our system can be trained up efficiently to provide more personalised results. }
	\label{fig:q3}
	\end{center}
\end{figure}

\begin{figure}
	\begin{center}
	\includegraphics[scale=0.75]{resources/q4.pdf}
	\caption{\textbf{How would you rate the performance of the website, in regards to loading time?} In Section~\ref{gui}, we made claims about designing for performance. We are glad to observe that our design was effective in keeping loading times down.}
	\label{fig:q4}
	\end{center}
\end{figure}
 
\clearpage

\subsubsection{Indirect Observation - Logging}
To further expand our user evaluation, we included code to log user actions. We then analysed our logs and drew some interesting conclusions. Here we present four different aspects of our analysis, depicted in Figures~\ref{fig:c1},~\ref{fig:c2},~\ref{fig:c3} and~\ref{fig:c4}, along with an explanation of each.
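Conceptually, each of these ratios is a simple count over logged events; the event names below are illustrative, not our actual log format:

```python
from collections import Counter

def like_ratio(events: list) -> float:
    """Fraction of viewed recommendations that were subsequently
    liked. Each event is a logged action name; the names used here
    are illustrative, not our actual log entries."""
    counts = Counter(events)
    viewed = counts["view_recommendation"]
    if viewed == 0:
        return 0.0
    return counts["like_recommendation"] / viewed

# Eight views and one like give the "1 in 8" figure discussed below.
log = ["view_recommendation"] * 8 + ["like_recommendation"]
```

The same counting scheme, applied separately to Wikipedia, YouTube and bookmarklet events, yields the four charts that follow.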

\begin{figure}[H]
	\begin{center}
	\includegraphics[scale=0.62]{resources/chart1.pdf}
	\caption{The ratio of Wikipedia recommendations \textit{``viewed and ignored''} over \textit{``viewed and liked''}. We consider 12\% to be a decent percentage, as 1 in 8 recommendations viewed was liked.}
	\label{fig:c1}
	\end{center}
\end{figure}

\begin{figure}[H]
	\begin{center}
	\includegraphics[scale=0.62]{resources/chart2.pdf}
	\caption{The ratio of YouTube video recommendations \textit{``viewed and ignored''} over \textit{``viewed and liked''}. A low percentage was observed but, in general terms, this is justified, as searching YouTube did not involve any semantic data processing.}
	\label{fig:c2}
	\end{center}
\end{figure}

\begin{figure}[H]
	\begin{center}
	\includegraphics[scale=0.62]{resources/chart3.pdf}
	\caption{Ratio of video recommendations viewed over Wikipedia recommendations viewed. One can see the vast difference in the numbers, which may suggest that the future of iSquirrel lies in the media sector.}
	\label{fig:c3}
	\end{center}
\end{figure}

\begin{figure}[H]
	\begin{center}
	\includegraphics[scale=0.62]{resources/chart4.pdf}
	\caption{Here we examine the use of the bookmarklet for \textit{``liking''} Wikipedia articles. Approximately 1 in 5 \textit{``likes''} originated from the bookmarklet, a fact that justifies our choice of implementing such a feature.}
	\label{fig:c4}
	\end{center}
\end{figure}

\newpage
 
\subsection{System Evaluation}
 
\subsubsection{Performance}

Nowadays, users get frustrated at the slightest hint of unresponsiveness. We had to make sure our system dealt effectively with response times.
  
Figure~\ref{time_taken_interests} shows the time, in milliseconds, taken for recommendations to be generated for several common interests. It is worth noting that recommendations are generated in the background, giving the user the opportunity to browse freely while this happens.
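Background generation of this kind can be sketched with a worker pool; the function names are illustrative and the generation step is a stand-in for the real DBpedia lookup:

```python
from concurrent.futures import ThreadPoolExecutor, Future

def generate_recommendations(interest: str) -> list:
    """Stand-in for the (slow) DBpedia lookup and scoring step."""
    return [f"{interest}-article-{i}" for i in range(3)]

executor = ThreadPoolExecutor(max_workers=4)

def add_interest(interest: str) -> Future:
    """Return immediately; recommendations arrive in the background."""
    return executor.submit(generate_recommendations, interest)

future = add_interest("Jazz")
# The UI keeps responding while the work runs; results are
# collected once ready.
results = future.result()
executor.shutdown(wait=True)
```

The call to \texttt{add\_interest} returns at once, which is why adding an interest feels instantaneous even when generation itself is slow.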

Before analysing the results, we expected the time taken to generate recommendations to be proportional to the number of subjects an interest has, which are in turn retrieved from DBpedia. Surprisingly, this is not the case: the time to extract these recommendations appears completely uncorrelated with the number of subjects an interest consists of.
 Here, the term subject refers to a term in the semantic ontology that describes an interest. Clearly, the number of subjects an interest has is purely dependent on the interest itself.
 
 \begin{figure}[H]
	\begin{center}
	\includegraphics[width=5.5in, height=3in]{resources/interest_graph.png}
	\caption{Time to generate recommendations}
	\label{time_taken_interests}
	\end{center}
\end{figure}

An interesting fact that might explain the irregular behaviour above is the limited knowledge we have about a particular subject: we do not know, at any time, how many articles/recommendations a subject will generate. Another potential factor is latency at the DBpedia endpoint; depending on the load on the DBpedia servers, some of our requests are answered faster than others.

The times discussed here should not be confused with the time taken to add an interest: an interest appears in the user's profile almost instantaneously.

The system can be enhanced and optimised to reduce these times considerably. Further technical explanation is beyond the scope of this chapter.

\subsubsection{Usability Heuristics}

There are ten general principles for user interface design. The term ``heuristics'' is used because they are more in the nature of rules of thumb than specific usability guidelines \cite{nielsen}.
We compare our system against these principles as follows:

\begin{enumerate}
\item \textbf{Visibility of system status}

iSquirrel copes well with providing feedback to the users at all times. We managed to produce recommendations on screen quite quickly and thus abide by this principle.

\item\textbf{Match between system and the real world}

We tried to minimise misconceptions. We like to think that our system conveys simple and comprehensible instructions that reduce user misinterpretation and annoyance.

\item\textbf{User control and freedom}

Our system, being accessible by everyone, sticks to this standard: it is just as easy to exit the application as it was to access it.
Currently, users do not have complete control over their recommendations, interests or privacy settings.
Deleting interests from a user's profile comes with its own intricacies and a number of factors that need to be considered. Do we also remove all recommendations associated with the removed interest, and if so, at once or gradually?
With time, all these features could be designed and implemented. For now, due to time limitations, we concentrated on the core of this project: the recommenders.

\item\textbf{Consistency and standards}

Our system strives not to introduce ambiguity at any point, so user confusion should rarely arise. To comply with several web standards, as described in Section~\ref{sysval}, our system is validated against them to yield a smooth transition from one browser to another.

\item\textbf{Error prevention}

We took extra care to protect users from occasional errors. Where the user needs to provide information to the system, we aimed to keep the input to a minimum (i.e.\ impose limitations) to avoid errors and system inconsistencies.
This technique prevents problems from occurring in the first place, making it a valuable asset in our attempts to evaluate the system.

\item\textbf{Recognition rather than recall}

As described in Section~\ref{gui}, the majority of users will have stumbled upon such systems (cf.\ Facebook) in the past, due to their popularity. Having been exposed to such systems, users should recognise the steps needed to navigate through ours, and their thinking should be effortless.
We made an effort to minimise the steps one needs to bring the system into a usable state.

\item\textbf{Flexibility and efficiency of use}

One might immediately expect that, due to our direct dependency on DBpedia, Last.fm and YouTube, our system would not respond as expected. Thanks to a meticulous, carefully thought-out design, the use of asynchronous requests in the UI allows the system to act in parallel and be efficient enough to meet the users' standards.
We have not included any \emph{accelerators}, as Nielsen likes to call them, in our system. Our system treats all users, both novices and experts, equally and does not provide any shortcuts.

\item\textbf{Aesthetic and minimalist design}

This is probably one of the most important principles. Keeping things simple suggests that the system will appeal to the majority of its user base. We avoided fancy graphics and design that would merely embellish the system's appearance. Whilst users' initial reactions to such dazzling effects may be positive and enthusiastic, the purpose of the system is somehow lost and its functionality becomes insignificant.

\item\textbf{Help users recognise, diagnose and recover from errors}

Once more, we did our best to construct clear and informative messages should an error occur. No error or warning messages that require expert knowledge are passed on to the user. The system intends to foster a safe, user-friendly environment that relieves the user of any concerns arising from such behaviour.

\item\textbf{Help and Documentation}

Although neither help nor documentation is provided on the website, our User Guide in Appendix~\ref{userguide} aims to eliminate the gap between ``thinking of knowing'' and understanding. It provides detailed, yet simple, instructions on how to use the system, taking into account the range of readers, from novices to experts.

\end{enumerate}
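The input-limiting approach described under error prevention can be sketched as a simple validator; the length limit and character rules shown are illustrative, not our actual constraints:

```python
import re
from typing import Optional

MAX_INTEREST_LEN = 50  # illustrative limit, not our actual value
# Letters, digits, spaces and hyphens only -- rejects markup and
# other input that could cause downstream inconsistencies.
VALID_INTEREST = re.compile(r"^[\w][\w\s\-]*$")

def validate_interest(raw: str) -> Optional[str]:
    """Return a cleaned interest string, or None if the input
    would risk errors or inconsistencies downstream."""
    cleaned = raw.strip()
    if not cleaned or len(cleaned) > MAX_INTEREST_LEN:
        return None
    if not VALID_INTEREST.match(cleaned):
        return None
    return cleaned
```

Rejecting bad input at the boundary, rather than handling its consequences later, is exactly the preventative stance the heuristic advocates.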


\section{Looking at other Recommender Systems: A Comparison}
In this section we take a brief look at existing recommender systems and compare our solution to theirs. Each system's features and techniques are discussed and analysed to identify their benefits and drawbacks.

\subsection{Existing recommender systems}

\subsubsection{StumbleUpon}
StumbleUpon\footnote{StumbleUpon's website: \url{http://www.stumbleupon.com/aboutus/}} delivers recommendations in the form of websites taking into account each user's profile. They use social networking principles to create a referral system that is used by more than 9 million users. According to their website StumbleUpon combines collaborative human opinions with machine learning of personal preference to create virtual communities of like-minded web surfers. This is commonly referred to as collaborative filtering which is thoroughly  described in Section~\ref{sec:recommendersystems}.

Each user has their own weblog which changes over time when they rate websites. Users with similar characteristics and ratings form a peer network that links everyone with a primary common interest.

StumbleUpon does not use any semantically tagged resources to provide users with recommendations and is solely based on information gathered by its user base.

\subsubsection{Glue}
Glue\footnote{Glue's website: \url{http://getglue.com/about}}, developed by AdaptiveBlue, was initially only available as a Firefox plug-in, limiting its user base to those who used Mozilla Firefox. They have since moved to a web interface to remove this restriction and expand.

 Glue is an attempt to use semantic recognition technology to identify movies, books, music and more when users browse world-famous websites such as Amazon, Last.fm, imdb.com, Wikipedia, etc. Users are given the opportunity to \emph{``like''} or \emph{``dislike''} these items. Based on this feedback, Glue then suggests books, movies, music, etc.\ that users might like, according to their personal tastes and what their friends like.
 
\subsubsection{Amazon}
Amazon\footnote{Amazon's website: \url{http://www.amazon.com}} is one of the largest electronic commerce companies. Its recommender uses a large pool of data on user shopping behaviour and does not require the user to input anything into their profile. This pool of data comprises the user's purchase history and the feedback received from the user. Amazon uses passive filtering, described in Section~\ref{sec:recommendersystems}, in the sense that it monitors the user's browsing history to recommend similar products to the user. Amazon always justifies its recommendations with a simple explanation for these items.


\subsection{The comparison}
The table in Figure~\ref{tbl:sys_comparison} highlights the features that each recommender boasts.

\begin{figure}[H] 
	\begin{center}
	\includegraphics[width=5.3in]{resources/comparison_e.pdf}
	\caption{Recommenders and their features}
	\label{tbl:sys_comparison}
	\end{center}
\end{figure}

Our system, although in a prototype state, holds up well against established products in the market. 

\begin{enumerate}
\item It is community based meaning you can befriend other people who you might share similar interests with and learn about their recommendations. 
\item It is adaptive, \emph{``moulding''} each user's profile to their preferences.
\item  The user is able to give feedback to the system which alters their profile. 
\item It uses both a content-based and a collaborative filtering approach to recommend music, videos, as well as semantic resources for recommending Wikipedia articles.
\item It does not provide justification for recommendations, preventing the user from determining the origin of each one.
\end{enumerate}

%\subsubsection{iSquirrel}
%iSquirrel\footnote{iSquirrel website: \url{http://www.websquirrel.net/}} is an Imperial College project attempt to mainly generate Wikipedia articles based on the user's profile using a large, semantically-tagged database of Wikipedia articles, named DBpedia. The system integrates with Facebook to speed up the registration process and also discover any of your friends who use the system. Users are asked to fill in their profile with a list of their interests so that the system can begin recommending articles. iSquirrel is learning from the users' \emph{``likes"} to adapt specifically to the users' interests and needs. 


%Donald Norman's model of action has been extensively used across the world to assist people in human-computer interface design; to design interactive systems that are enjoyable to use, engaging and accessible. The design of such systems should be human-centred that puts people at the centre of the design process. 
%Designing such systems should minimise the length of the evaluation process and effectively allow you to focus on other key factors such as implementation and testing.



%\section{Why do we need to evaluate - Context of Evaluation}
%An interesting question yet with a very simple answer. Evaluation is the key to developing successful software applications. Both the designers and developers of such systems need to examine whether it will appeal to its user base. Nowadays, users identify with pleasant and engaging applications. 

%Evaluating often amounts to a substantially lengthy procedure. If we manage to integrate evaluation into every iteration then we succeed in receiving feedback on our design ideas, major misconceptions and bugs. With such information at hand, great emphasis can be given on these problems rather than arguing among ourselves (the designers/developers).
%We took a similar approach by incorporating an evaluation technique in each iteration gaining in efficiency and software productivity.

%Our system, being a web application, opens up door to a variety of features and aspects that need to be evaluated. We may start by asking users whether the website is fast, whether the browsing experience is interactive, smooth, engaging, whether it serves its purpose. The list is endless. 

%We may need to consider where to evaluate our system. This is straight-forward since a web application usually runs on a computer, and ours is no exception. Users will need a computer and an internet connection to provide us with feedback.

%We will show you how, with the use of different evaluation techniques, we generated our results and how our system was affected by these findings.

%\subsection{Cognitive Walkthrough}
%Cognitive walkthroughs involve simulating a user's problem-solving process at each step in a human-computer dialog and allow the evaluator to determine the user's progress when guided step by step at every interaction. 

%Since our system will be primarily used without any prior training, this method allows us to analyse our design in terms of exploratory learning. The user themselves must learn how to use the system by exploring its interface. It will give us a clear insight whether the system has been successfully designed so that information is easily conveyed to the user.

