% Chapter 5

\chapter{Evaluation}
\label{Chapter5}
\lhead{Chapter 5. \emph{Evaluation}}

This chapter describes the evaluation of all the major components of the final product.

\section{Article Content Extraction} \label{5vace}

Testing on the article extraction technique was performed in two stages:

\begin{enumerate}
\item \textbf{Automatic Removal:} If no text was obtained, or the result was less than one sentence long, the extraction could be automatically classified as unsuccessful.
\item \textbf{Manual Comparison:} To ensure that the article's content was accurately extracted from the web page, the extraction results and the original article were compared manually to see whether too much or too little content had been extracted.
\end{enumerate}

Using a set of feeds comprising 200 articles gathered from 43 different sources,\footnote{ Syndicated RSS feeds from Google News} the following results were produced:

\begin{table}[ht]
\centering
\begin{tabular}{|c|cccc|}
	\hline
Extraction Success	&	Exact	&	Too Much	&	Too Little	&	None \\
	\hline
Quantity	&	191	&	4	&	3	&	2 \\
Percentage	&	95.50\%	&	2.00\%	&	1.50\%	&	1.00\% \\
	\hline
\end{tabular}
	\caption[Results of Content Extraction Technique]{Results of Content Extraction Technique on 200 articles taken from Google News RSS Feeds}
	\label{tab:extract}
\end{table}

As can be seen in Table~\ref{tab:extract}, 95.5\% of articles were correctly extracted from their web pages. Of the remaining articles, no content at all was retrieved in only 1\% of cases. That figure also includes any articles for which 50\% or less of the content was acquired; as the results show, this occurred very rarely.

For the remaining 3.5\% of articles, there was an almost even split between too little and too much content being extracted. An incomplete article was usually due to an uncommon HTML structure, where the content was divided between different types of HTML tags and located in different areas of the page. As the results show, this is extremely rare, and it is a situation in which none of the other extraction techniques (described in Section \ref{3ace}) would be successful either.

For the remaining 2\% with more text than required, only in one case was the amount of redundant information over two sentences.\footnote{ It would usually be a sentence similar to \textit{`The BBC is not responsible for the content of external internet sites'}} This extra information was therefore usually insignificant and would in any case be removed by the feature selection process. 

This means that the overall success rate of acceptably acquired articles could be considered to be 97.5\%, which would be superior to the extraction methods proposed by Reis et al.~\cite{treeedit} and Zheng et al.~\cite{vs}.

\subsection{Limitations of News Extraction Technique}

There are four major issues which affect the performance of this technique:
\begin{itemize}
\item \textbf{Extract appears elsewhere in page:} If the extract is detected elsewhere on the page, this method would fail, as it would mistakenly believe it had identified the correct branch of the HTML tree. This scenario has, however, never been encountered in testing and is very unlikely to occur.
\item \textbf{Denial of Access:} Many online content providers require subscription to view entire article contents or simply refuse automated crawlers access to their site. This is an issue which would affect any automated extraction approach and is thus unavoidable.
\item \textbf{Lead Paragraph in different format:} Some online news sources, such as The Sun,\footnote{ The Sun Online. \url{http://www.thesun.co.uk/sol/homepage/} [Last Accessed 20/05/09].} will emphasise the first paragraph using a header (H2) tag to draw the reader's attention. This can lead to the rest of the article not being extracted. As this is a rare situation, only occurring for particular sources, individual wrappers can be designed to cope with them.
\item \textbf{Non-standard HTML tags:} With this method's current implementation based on the Java Swing HTML parser,\footnote{ Details available from the Sun Developer Network. \url{http://java.sun.com/products/jfc/tsc/articles/bookmarks/} [Last Accessed 20/05/09].} only properly formatted web pages that use standard HTML tags can be accepted. Some news sources, such as the NY Times,\footnote{ The NY Times Online. \url{http://www.nytimes.com/} [Last Accessed 20/05/09].} incorporate their content within custom tags\footnote{ The NY Times encloses their articles within a `NYTIMES' tag.} which are not recognised by the parser. In order to circumvent this problem, a basic HTML parser would have to be built that could accept unconventional tags. This would not be very difficult to implement.
\end{itemize}
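The last of these limitations can be illustrated concretely. As a sketch only, using Python's standard library rather than the product's Java implementation, a tag-agnostic parser collects text regardless of element names, so a custom wrapper tag causes no failure:

```python
from html.parser import HTMLParser

class LenientTextExtractor(HTMLParser):
    """Collects text content while ignoring element names entirely,
    so a non-standard wrapper tag does not break extraction."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

extractor = LenientTextExtractor()
extractor.feed("<NYTIMES><p>Article body here.</p></NYTIMES>")
# extractor.chunks now holds the text nodes, custom tag and all
```

A parser of this kind recovers the text even when the surrounding tags are unknown to it, which is all the proposed basic parser would need to do.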

\section{Text Classification}

Below is an evaluation of the text classification techniques used to categorise articles by topic and rank their relevance to a particular location. 

\subsection{Testing Methods}

The classification techniques were evaluated using two testing methods:

\begin{itemize}
\item  \textbf{Stratified ten-fold cross validation:} This method involves partitioning the training data into ten sets, in each of which the proportion of instances of each class matches that of the full set. Each set is in turn held out and used as test data for a classifier trained on the remaining nine. This provides a clear indicator of accuracy, as it uses several sets of unseen test data to give a thorough evaluation of the classifier's performance in different scenarios.
\item \textbf{Pre-labelled test set:} A set of articles, including topics and locations unrelated to any training data, was used in order to simulate a real-world example. This was particularly useful for analysis of the location classifier.
\end{itemize}
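The stratified partitioning described above can be sketched as follows; the function names and round-robin dealing strategy are illustrative, not taken from the implementation:

```python
import random
from collections import defaultdict

def stratified_folds(labelled_docs, k=10, seed=0):
    """Partition (document, label) pairs into k folds that preserve
    the per-class proportions of the full set."""
    by_class = defaultdict(list)
    for doc, label in labelled_docs:
        by_class[label].append((doc, label))
    folds = [[] for _ in range(k)]
    rng = random.Random(seed)
    for items in by_class.values():
        rng.shuffle(items)
        for i, item in enumerate(items):
            folds[i % k].append(item)  # deal round-robin per class
    return folds
```

Each fold is then used once as unseen test data for a classifier trained on the other nine.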

\subsection{Kernel Selection}

As discussed in Section \ref{3ker}, there are various kernels that can be used with Support Vector Machines to increase classification accuracy. With linear methods known to be faster, but possibly less accurate, than RBF or polynomial kernels, it was important to compare speed and classification accuracy to decide which one was to be used in this product.


\begin{table}[ht]
\centering
\begin{tabular}{|c|cc|}
	\hline
Kernel Type	&	Accuracy in \% 	&	Time in s	\\
\hline
Linear	&	86.50	&	28.306	\\
Polynomial	&	86.25	&	29.762	\\
RBF	&	91.50	&	29.896	\\
	\hline
\end{tabular}
	\caption[Results of testing various kernels]{Time and Accuracy comparison of various different kernel choices for SVM model}
	\label{tab:kernel}
\end{table}

As can be seen in Table~\ref{tab:kernel}, the results show the linear method to be fastest, and the RBF kernel to have achieved the greatest accuracy. With accuracy more important than running time, RBF was thus selected as the optimal choice for use in this product.
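For reference, the kernels compared in Table~\ref{tab:kernel} differ only in how they score the similarity of two feature vectors. A minimal sketch of their standard formulae (following the usual LIBSVM-style definitions; the parameter defaults here are illustrative):

```python
import math

def dot(x, z):
    """Inner product of two dense feature vectors."""
    return sum(a * b for a, b in zip(x, z))

def linear(x, z):
    """Linear kernel: fastest, no extra parameters."""
    return dot(x, z)

def polynomial(x, z, gamma=1.0, coef0=0.0, degree=3):
    """Polynomial kernel: (gamma * <x, z> + coef0) ^ degree."""
    return (gamma * dot(x, z) + coef0) ** degree

def rbf(x, z, gamma=0.5):
    """RBF (Gaussian) kernel: exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))
```

The RBF kernel's extra exponential accounts for its slightly longer running time, while its non-linear decision boundary explains the accuracy gain observed above.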

\subsection{Parameter Selection}

To ensure that the Support Vector Machines were working optimally with the RBF kernel, it was important to optimise its parameters. These are \textit{C}, which represents the allowable error rate (as discussed in Section \ref{3sel}), and $\gamma$, a kernel constant. With their optimal values differing for each set of training data, it was important to evaluate them for each new classification model. To avoid having to acquire these values manually, a `grid-search' technique was used to find the optimal parameter pairs, as can be seen in Figure~\ref{fig:grid}.


\begin{figure}[h]
  \centering         
  \includegraphics[width=5in]{localTrain.png}
  \caption{Grid-Search for location classification model as performed by LIBSVM}
  \label{fig:grid}
\end{figure}

Figure~\ref{fig:grid} shows a heuristic approach being used to find the best parameters for the location classifier, with the graph separated into the regions providing the best classification accuracy. Random pairs were selected and tested using ten-fold cross validation. Those that led to better results were further investigated until the optimal values were obtained. 
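The grid-search idea can be sketched as an exhaustive sweep over $(C, \gamma)$ pairs on a base-2 logarithmic scale; \texttt{cv\_accuracy} below is a hypothetical stand-in for the ten-fold cross validation run performed for each pair:

```python
import itertools

def grid_search(cv_accuracy,
                c_exponents=range(-5, 16, 2),
                gamma_exponents=range(-15, 4, 2)):
    """Exhaustive sweep over (C, gamma) pairs on a base-2 logarithmic
    grid. cv_accuracy(C, gamma) is expected to return the cross
    validation accuracy of an RBF-kernel model trained with those
    parameters; the best-scoring pair is returned."""
    best_c, best_gamma, best_acc = None, None, -1.0
    for ce, ge in itertools.product(c_exponents, gamma_exponents):
        C, gamma = 2.0 ** ce, 2.0 ** ge
        acc = cv_accuracy(C, gamma)
        if acc > best_acc:
            best_c, best_gamma, best_acc = C, gamma, acc
    return best_c, best_gamma, best_acc
```

In practice the coarse sweep is then refined with a finer grid around the best region, matching the `further investigated' step described above.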

\subsection{Feature Selection}

As discussed in Section~\ref{3sel}, feature selection was important for extracting the most indicative features for each class. As well as evaluating the impact on overall accuracy, each method's effect on vocabulary size\footnote{ Number of words represented in the feature space} was also measured. Smaller feature spaces significantly reduce the processing time required to use SVMs, which in turn minimises the time taken to classify articles.

\subsubsection*{Document Frequency}

\begin{figure}[h]
  \centering
  \subfigure[Document Frequency Effect on Vocabulary size]{\includegraphics[width=3.7in]{DocFreqVocab.pdf}}            
  \subfigure[Document Frequency Effect on Classification Accuracy]{\includegraphics[width=3.7in]{DocFreqAcc.pdf}}
  \caption{Effect of document frequency threshold on vocabulary size and classification accuracy}
  \label{fig:docfreq}
\end{figure}

As can be seen from Figure~\ref{fig:docfreq}, the document frequency threshold had a considerable influence on both vocabulary size and classification accuracy. When only selecting terms that appeared in at least two documents, the vocabulary of the training data was reduced by over 50\%, falling from 10,218 to 5,598 for the topic classification vocabulary. The location classifier experienced a similar drop in vocabulary size, going from 9,994 to 3,720 (full tables for these graphs are provided in Appendix \ref{tab:dftable}). This is to be expected given the nature of text, with many different words being used to describe the same situation or concept. 

The sharp rise in accuracy of the location classifier is indicative of the disparate nature of its training data. Although two articles may be about a particular location, the number of common terms they share can be fairly small: one could be an article about a local football team, the other describing a local council meeting. 

This means that with little feature selection, the location classifier will perform very poorly. It will not have identified the features that truly indicate an article's relevance to a particular location. Feature selection is not as influential on topic classification, since articles on the same subject will generally share a greater number of common terms.

As can be seen in Figure~\ref{fig:docfreq}, noisy features are removed quickly, leveling out performance, with more aggressive thresholds having a negative effect. This is because infrequent terms will be removed from the training set, regardless of how well they indicate association to a particular class.
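The document frequency threshold itself is simple to state: a term survives only if it appears in at least a given number of distinct documents. A minimal sketch (names illustrative):

```python
from collections import Counter

def df_filter(tokenised_docs, min_df=2):
    """Return the vocabulary of terms appearing in at least min_df
    distinct documents; each term is counted once per document."""
    df = Counter()
    for doc in tokenised_docs:
        df.update(set(doc))  # set() so repeats within a doc count once
    return {term for term, count in df.items() if count >= min_df}
```

Raising \texttt{min\_df} reproduces the trade-off seen above: noisy one-off terms vanish first, and more aggressive thresholds start to discard genuinely indicative but rare terms.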


\subsubsection*{Information Gain}

\begin{figure}[h]
  \centering
  \subfigure[Information Gain Effect on Vocabulary size]{\includegraphics[width=3.7in]{igVocab.pdf}}            
  \subfigure[Information Gain Effect on Classification Accuracy]{\includegraphics[width=3.7in]{igAcc.pdf}}
  \caption{Effect of information gain threshold on vocabulary size and classification accuracy}
  \label{fig:ig}
\end{figure}

As can be seen in Figure~\ref{fig:ig}, information gain (IG), as discussed in Section \ref{3ig}, also has a significant effect on both vocabulary size and classification accuracy. As with document frequency, location classification accuracy is significantly increased by its application. With all features having an IG value of over -1.4, no features are removed before that threshold. When it is set to -1.3, however, accuracy is increased by 42\% and the size of the vocabulary is reduced by over 70\%. 

The optimal IG thresholds will in fact produce better performance, and possess a smaller vocabulary, when compared to the document frequency method. This is not surprising, as information gain provides a better indication of a feature's benefit to the classification process. Combining both feature selection techniques provides no benefit to the classifier, as IG selection possesses all the benefits of the document frequency method, whilst also keeping rarely occurring words that are highly indicative of a particular class. 
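For clarity, a sketch of the standard information gain computation: the reduction in class entropy obtained by knowing whether a term occurs in a document. The helper names are illustrative, and IG as defined here is non-negative; the negative thresholds quoted above follow the implementation's own scale.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a class-count distribution."""
    total = sum(counts)
    return -sum((n / total) * math.log2(n / total) for n in counts if n)

def information_gain(term, docs):
    """docs: list of (set_of_terms, class_label) pairs. Returns the
    reduction in class entropy from knowing whether `term` occurs."""
    with_t = [label for terms, label in docs if term in terms]
    without_t = [label for terms, label in docs if term not in terms]
    h_class = entropy(list(Counter(label for _, label in docs).values()))
    n = len(docs)
    h_cond = (len(with_t) / n) * entropy(list(Counter(with_t).values())) \
           + (len(without_t) / n) * entropy(list(Counter(without_t).values()))
    return h_class - h_cond
```

A term that perfectly separates the classes scores the full class entropy, while a term whose presence says nothing about the class scores zero, which is why IG keeps rare but highly indicative words that document frequency would discard.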

\subsection{Size of Training Set}

In order to evaluate whether the size of the training set had any effect on classification accuracy, SVM models with various amounts of training data per class were built and compared using an unseen testing set. 

\begin{figure}[ht]
  \centering         
  \includegraphics[width=4in]{sizetopic.pdf}
  \caption{Effect of Training Set size on Classification Accuracy}
  \label{fig:size}
\end{figure}

Figure~\ref{fig:size} shows how adding ten additional documents to the training set of each class affects overall classification accuracy. Although it appears to reach an optimum when using one hundred instances per class, the graph shows no discernible pattern. A regression line would show the variation to be noise, leading to the conclusion that substantially increasing the size of each training set has no major effect on classification accuracy. 

This is to be expected, as a well-selected training set will not be more accurate if more documents are added to it. It is far more important to ensure that the contents of training data accurately represent the vocabulary for its particular category.

\subsection{Topic Categorisation} \label{5tc}

The topic classification task can be split between assigning articles into the more general `News' section, or the more specific categories for `Sports', `Business' or `Entertainment.' For the purposes of evaluating topic classification accuracy, it can therefore be considered that true positives are articles correctly assigned to one of the specific categories, and true negatives are those correctly assigned to `News.' 

A decision threshold was established to minimise the number of false positives produced. Any article which received a decision value below this threshold was deemed to be insufficiently classified and was thus placed in the `News' section. The evaluation of this threshold can be seen in Figure~\ref{fig:roc}.

\begin{figure}[ht]
  \centering         
  \includegraphics[width=4in]{rocTopic.pdf}
  \caption{ROC curve comparing true and false positive rates for various decision thresholds.}
  \label{fig:roc}
\end{figure}

Figure~\ref{fig:roc} displays the ROC curve for this threshold, where the true and false positive rates are plotted against each other. As discussed above, more importance is placed on lowering the false positive rate. This is because it is more acceptable for a `Sports' article to be misclassified into the generic `News' section than it is to see it in the `Business' or `Entertainment' ones. 

To ensure this situation was as rare as possible, whilst still keeping a high classification accuracy, the threshold of 0.65 (indicated with a square on Figure~\ref{fig:roc}) was selected.
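The thresholding rule is then straightforward: take the specific section with the highest decision value, and fall back to `News' when it does not clear the threshold. A sketch (the dictionary interface is illustrative):

```python
def assign_section(decision_values, threshold=0.65):
    """decision_values maps each specific section ('Sports', 'Business',
    'Entertainment') to the SVM decision value for an article. If no
    section clears the threshold, the article falls back to 'News',
    trading a little recall for a low false positive rate."""
    section, value = max(decision_values.items(), key=lambda item: item[1])
    return section if value >= threshold else "News"
```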

\subsection{Location Ranking}

In order to evaluate the results of the location ranking method, five sets of news articles were acquired. These featured 20 articles known to be relevant to the desired location (class A), and another 20 which were not (class B). 

These were submitted to the classifier and a list of ranked articles obtained. If any articles from class A were found to be ranked below a certain threshold, they were considered false positives. Conversely, any articles of Class B found to be ranked higher than the threshold value were considered to be false negatives. For this method, false positives were of greater concern as it would mean users could potentially miss articles relevant to them.
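Under these definitions, the error rates can be computed directly from a ranked list. A sketch, mirroring the definitions above (names illustrative):

```python
def ranking_error_rates(ranked, relevant, cutoff):
    """ranked: article identifiers in descending order of relevance;
    relevant: identifiers known to concern the location (class A);
    cutoff: rank threshold. Following the definitions above, a class A
    article ranked below the cutoff counts as a false positive, and a
    class B article ranked above it as a false negative."""
    above = set(ranked[:cutoff])
    below = set(ranked[cutoff:])
    false_positives = len(below & relevant)
    false_negatives = len(above - relevant)
    n = len(ranked)
    return false_positives / n, false_negatives / n
```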

\begin{table}[ht]
\centering
\begin{tabular}{|c|ccc|}
	\hline
in \%	&	Ranking Accuracy	&	False Negative Rate	&	False Positive Rate	\\
\hline
Bristol	&	82.86	&	8.57	&	8.57	\\
Durham	&	86.11	&	5.56	&	8.33	\\
Manchester	&	82.35	&	8.82	&	8.82	\\
London	&	79.41	&	14.71	&	5.88	\\
	\hline
\end{tabular}
	\caption[Results of Location Ranking]{Results of the location ranking method, averaged over five iterations of the test set}
	\label{tab:loc}
\end{table}

The results in Table~\ref{tab:loc} are averaged over the five iterations of the test set. The overall ranking accuracy is considerably lower than the ten-fold cross validation accuracy the location classifier achieved in previous results. There are two main reasons for this:

\begin{enumerate}
\item Although the location classifier works with over 99\% accuracy when discriminating between locations it has been trained for (as can be seen in Figure~\ref{fig:ig}), it will be less accurate when dealing with articles relevant to other locations. This is because, unlike the topic classifier,\footnote{ The generic `News' section encompasses all other topics not covered by their own individual classes.} it does not contain a class indicating the set of articles that are not relevant to any of the trained locations.
\item It is also possible that the training sets do not fully represent the vocabulary of terms relevant to each location. This can explain the false positive rate, as some articles about a particular location may not necessarily contain the same terms as are present within the training data.
\end{enumerate}

Ranking the exact order of relevance to a given location is a very subjective matter, and a difficult task even when performed manually. Beyond evaluating the method's overall success or failure, as in Table~\ref{tab:loc}, the only other option was to inspect the lists produced and check whether they were in a sensible order. These proved largely acceptable, with most of the false positives located in the middle of the list.

\subsection{Limitations of SVM approach}

Support Vector Machines with good feature selection are able to perform both topic and location classification to a very high degree of accuracy. The largest constraint, however, is the scope of the training set provided. Any article to be classified using this approach has to be converted into the feature space the classifier is trained for. This means that if the article contains none of the words present in the training set, the SVM model will struggle to assign it to a category.

An article about Swansea, for example, is not likely to contain many of the same words that are present in articles about Bristol, Durham, or any of the other locations the classifier is trained for. This means that when it is converted into the feature space using the training vocabulary (see Section \ref{3bog}), it is likely to contain few, if any, values.
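This conversion can be sketched as follows: the article is mapped onto the training vocabulary as term counts, and any term unseen during training is simply dropped, which is why an article about an untrained location can yield an almost empty vector (names illustrative):

```python
def to_feature_vector(article_tokens, vocabulary):
    """Project an article onto the training vocabulary as term counts.
    Tokens outside the vocabulary are dropped, so an article sharing no
    terms with the training data produces an all-zero vector."""
    index = {term: i for i, term in enumerate(sorted(vocabulary))}
    vector = [0] * len(index)
    for token in article_tokens:
        if token in index:
            vector[index[token]] += 1
    return vector
```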

It is also important for the SVM model to possess classes that cover the whole spectrum of possible input. This is why the topic classifier is more successful at dealing with articles it is not trained for. The `News' class represents all articles not about `Sports,' `Business' or `Entertainment,' providing greater success when attempting to classify articles on unknown topics such as `Politics' or `Health.'

\subsection{Scalability}

In order to ensure that the current scope of the product could be easily expanded, it was crucial to evaluate the scalability of the classification techniques involved. This was achieved by removing topics and locations from the training sets and evaluating the classification accuracy of these new models.

\begin{figure}[ht]
  \centering         
  \includegraphics[width=4in]{scale.pdf}
  \caption{Chart indicating how the number of classes affects overall classification accuracy.}
  \label{fig:scale}
\end{figure}

Figure~\ref{fig:scale} shows how the number of classes used in each model affects overall classification accuracy. Adding an extra topic or location to the classifier is a very simple process, with no modifications needing to be made to the program. All that is required is the insertion of a new training set and the re-evaluation of the kernel parameters (as discussed in Section \ref{3ker}). 

It can clearly be seen from Figure~\ref{fig:scale} that there is no statistically significant change in classification accuracy when adding new classes. Although there is a small increase when moving from two to three classes, the classifier is hardly affected by the addition of a fourth class. This shows that the only requirement for expansion of the current product's scope is the insertion of training data for each additional category.

\section{Query Expansion}

In order to test the accuracy of the query expansion technique (as discussed in Section \ref{3fnl}), the minimum size for each section of the newspaper (News, Sports, etc.) was set to a large enough value to require the retrieval of more articles from several of the nearest neighbours. This was tested on many different locations, with the results compared to a manual calculation of the order in which the locations should be presented.

\begin{figure}[ht]
  \centering         
  \includegraphics[width=4in]{query.pdf}
  \caption{Results of using query expansion to search for nearest neighbours to Bristol, UK}
  \label{fig:brisquery}
\end{figure}

Figure~\ref{fig:brisquery} represents the results of searching for the nearest neighbours to Bristol, according to the GPS coordinates stored within the locations database (see Section \ref{4nn}). As can be seen from the results, the technique correctly identified all of the nearest neighbours in the right order. Although the time taken by this process was not measured exactly, it made no significant difference to the overall running time of the product.
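The underlying nearest-neighbour search can be sketched with a great-circle (haversine) distance over the stored coordinates; the place names and coordinates below are illustrative approximations, not values taken from the locations database:

```python
import math

def nearest_neighbours(target, locations, n=5):
    """locations maps a place name to (latitude, longitude) in degrees;
    returns the n names closest to target by great-circle distance."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = math.sin((lat2 - lat1) / 2) ** 2 \
            + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km
    return sorted(locations, key=lambda name: haversine(target, locations[name]))[:n]
```

Sorting every stored location by distance is adequate at the database's current scale, which is consistent with the process adding no measurable overhead to the product's running time.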
