% Chapter 3

\chapter{Technical Basis}
\label{Chapter3}
\lhead{Chapter 3. \emph{Technical Basis}}
                      
This chapter describes the technical theory used to implement the local newspaper generator. This includes descriptions both of the techniques implemented and of alternative methods, as well as explanations of why the latter were not selected.

The particular issues that will be addressed are:

\begin{itemize}
\item The aggregation and extraction techniques used to acquire articles for analysis. 
\item Vector representations of documents, as well as the different methods for representing words in the feature space.
\item Techniques for selecting the best features for text classification. 
\item Statistical classification techniques, with a detailed explanation of Support Vector Machines. 
\item The topic and location categorisation of online news articles.
\item Detection of nearest neighbours to a particular location.  
\end{itemize}

\section{Online Content Monitoring}

The first task the product will need to perform is the acquisition of content from online news sources. This is essential in order to provide sufficient data for the testing of the proposed topic and location categorisation techniques. There are two distinct methods of collecting and monitoring online news:

\subsection{Web crawling}

The naive approach to collecting news articles is to use web crawlers which search through online news sources and index any new articles detected. This approach is quite limited as it requires a crawler capable of automatically differentiating between news articles and the other web pages a news source may contain. These crawlers must therefore be manually tailored to each source, and constantly updated to ensure continued performance. 

With no knowledge of when these sites are updated, the crawlers simply have to be relaunched at regular intervals in order to acquire new content. They are also unable to guarantee the detection of all articles present on a particular source (as explained in Section \ref{2aggr}). This `hit and miss' approach would therefore not be the optimal choice for this task.

\subsection{RSS Feed Monitoring}

As discussed in Section \ref{2rss}, RSS feeds provide syndicated content from news sources. Using these feeds would therefore directly address and resolve the issues faced by using web crawlers, as they are automatically updated with new content and can be tailored to provide only news articles. This removes any delay between an article's publication and detection by the program.

Another benefit of using this approach is the metadata\footnote{ RSS feeds will almost always provide the title, publication date and an extract from the article} attached to each article, which can provide useful information for the purposes of content extraction and classification. 

With almost all online content publishers providing these feeds, they shall be the principal form of input into the program. 
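As an illustration, the metadata of interest can be read from an RSS 2.0 feed using only standard XML parsing. The feed below is a purely hypothetical example; a real implementation would fetch the feed over HTTP.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed (hypothetical example data).
FEED = """<rss version="2.0"><channel>
<title>Example Local News</title>
<item>
  <title>Library reopens</title>
  <link>http://example.com/news/library-reopens</link>
  <pubDate>Mon, 09 Mar 2009 10:00:00 GMT</pubDate>
  <description>The central library reopened today...</description>
</item>
</channel></rss>"""

def parse_feed(xml_text):
    """Extract (title, link, date, extract) metadata from each RSS item."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "date": item.findtext("pubDate"),
            "extract": item.findtext("description"),
        })
    return items

articles = parse_feed(FEED)
print(articles[0]["title"])   # -> Library reopens
```

The title, date and description fields gathered here are precisely the metadata later used for content extraction and classification.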

\section{Article Content Extraction} \label{3ace}

Although RSS feeds provide links to new articles, they rarely provide the full text required for proper analysis and classification. This information therefore needs to be extracted from the web page on which the content is actually displayed.

This is a complex task as there are no general standards for the way news is published online. With web pages containing significant amounts of redundant information, simply extracting all text from the page would lead to very noisy data and thus inaccurate classification. There are many different approaches that have been proposed for solving this problem:

\subsection{Visual Wrapper Approach} \label{9vw}

The simplest method is to define a schema for each particular news source which sets out exactly where the relevant information\footnote{ Refers to the article's title, author and full content} is located within the web page. This can be as simple as indicating for a particular news source: \textit{Extract title from position X, and all text between position Y and Z within the HTML code.}

Also known as a visual wrapper, this approach works on the assumption that every online news source possesses a template for its news stories, meaning that the article content will always appear in the same location on the page. 

These schemas need to be manually constructed for each new source and updated each time its template is altered, limiting scalability. Some approaches have been proposed that automate the process of learning these wrappers, such as those suggested by Zheng et al \cite{vs} and Meng et al \cite{vwrap}. Although they achieve a 95\% success rate \cite{vs}, they are dependent on manually labelled training data from each source, and thus also suffer from a lack of adaptability to new templates.

With the large number of disparate sources that local news can be acquired from, especially when attempting to cover a very wide range of locations, this would not be a viable approach for this project. 

\subsection{Extraction Using Tree-Edit Distance}

Reis et al \cite{treeedit} use the consistency of HTML DOM trees (as explained in Section \ref{2dom}), and the concept of Tree-Edit distances \cite{tanaka}, to automatically extract news from web sites. 

A form of visual wrapper detection (as described in Section \ref{9vw}), this method generates an extraction pattern from clustered pages and likewise works on the assumption that each news source shares a common template.

This process involves acquiring a set of example web pages for each source and using the DOM tree to convert them into ordered rooted trees.\footnote{ Trees that possess a fixed root node and in which the order of sibling nodes is defined.} These trees are then compared against each other by calculating the minimum number of operations needed to transform one into the other, a concept formally defined as mapping by Tai \cite{tai}. This allows web pages with common formats and layout characteristics to be clustered together and their contents automatically extracted.

This method shares the same problems as other visual wrapper approaches as it relies on a manually labelled training set for each individual news source. It has also only been shown to achieve 87.71\% accuracy \cite{treeedit}, significantly lower than many other proposed methods.

\subsection{Perception-Oriented News Extraction}

Chen and Xiao \cite{percep} have proposed an extraction approach based on the visual perception of a web page. This method converts web pages into a Function-Object Model (FOM) \cite{fom}, which attempts to define the function of each element, as well as its content, within the structure of a web page. 

This approach works on the assumption that web sites are designed for humans to easily perceive the different elements they contain. It therefore uses the FOM model to simulate a human's visual perception of the page and automatically detect an article's content within a page.

Although it can be used more generically than visual wrappers, and is reported to have 99\% extraction accuracy \cite{percep}, it requires the HTML code of a web page to be converted into an FOM model, a computationally expensive task \cite{fomconv}. This process is also not well documented and is subject to a patent by Google \cite{fompatent}, meaning that any product adopting this approach could not be used commercially.

\subsection{Tree Analysis using RSS Extract} \label{tarss}

This novel extraction technique uses information provided from RSS feeds to accurately extract article content in a computationally efficient manner. Although RSS feeds tend only to provide an extract from the article, it is possible to use this information to obtain the full content without the need for any prior knowledge about the news source.

Extensive analysis of various online news sources, as well as previous work in this area \cite{percep, vs}, has shown that the content of an article is usually encompassed within a larger structural element, such as a DIV\footnote{ Defines a division or a section in an HTML document.} tag. When representing this web page as a DOM tree (as discussed in Section \ref{2dom}), the article content is thus contained within one particular branch of it whose root would be the DIV tag\footnote{ Or the web page's root node if no DIV is present.}.

Using the axioms defined by Chen and Xiao \cite{percep}, coupled with the structural analysis of many different online news articles, it is also possible to identify an article's full text as being located within the same HTML tags at a constant depth within the DOM tree.

Using the extract of the article provided by the RSS feed, it is thus possible to locate the branch of the DOM tree containing the article text, as well as the HTML tag encompassing it. The contents of all of these tags, at the given depth within the particular branch, can thus be extracted to provide the full article text. Although this method depends on an extract from the article being provided, it does not require any training data or tailoring to individual news sources, allowing it to be fully scalable. 
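The procedure described above can be sketched as follows, using nested tuples as a hypothetical stand-in for a parsed DOM tree; a real implementation would operate on actual parsed HTML. Given the RSS extract, the matching text node is located, the nearest enclosing DIV is taken as the article branch, and all text in tags of the same type at the same depth within that branch is collected.

```python
# Hypothetical stand-in for a DOM tree: (tag, [children]) or (tag, "text").
DOM = ("html", [
    ("div", [("p", "Navigation menu"), ("p", "Advertising")]),
    ("div", [                                   # branch holding the article
        ("p", "The central library reopened today after months of work."),
        ("p", "Local residents welcomed the decision."),
        ("p", "The council plans further renovations."),
    ]),
])

def find_match(node, extract, depth=0, ancestors=()):
    """Return (ancestors, tag, depth) of the text node containing the extract."""
    tag, content = node
    if isinstance(content, str):
        return (ancestors, tag, depth) if extract in content else None
    for child in content:
        hit = find_match(child, extract, depth + 1, ancestors + (node,))
        if hit:
            return hit
    return None

def collect_text(node, tag, depth, current=0):
    """Gather the text of every node with the given tag at the given depth."""
    t, content = node
    if isinstance(content, str):
        return [content] if t == tag and current == depth else []
    out = []
    for child in content:
        out.extend(collect_text(child, tag, depth, current + 1))
    return out

ancestors, tag, depth = find_match(DOM, "library reopened today")
# The deepest DIV ancestor (or the root, if none) is taken as the article branch.
branch_idx = max((i for i, (t, _) in enumerate(ancestors) if t == "div"),
                 default=0)
branch = ancestors[branch_idx]
article = collect_text(branch, tag, depth - branch_idx)
print(" ".join(article))
```

Note that the navigation and advertising text in the first DIV is never visited, since only the branch containing the extract is searched.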

With RSS feeds being used as the principal form of input for the local newspaper generator, it is thus the optimal choice of extraction technique for this project.

\section{Document Representation} \label{3dr}

In order to use statistical text classification methods (as described in Section \ref{1textclass}), documents must be represented in a numerical form. This is the process of taking document $\mathbf{D}$, and placing it in a vector space, as first proposed by Salton \cite{salton}. This can be represented as:


\begin{equation}
\mathbf{D = \langle t_{1},t_{2},t_{3},\ldots,t_{i}\rangle}
\end{equation}

where $\mathbf{t}$ is a feature of document $\mathbf{D}$. 
 
There are many possible approaches to converting text-based information into a numerical feature space, with the two main methods described below.

\subsection{Bag of Words} \label{3bog}

The most common approach is to represent the corpus (set of documents) as a bag of words. This refers to converting each document's text into a vector of word occurrences. Each word present in the corpus is assigned an index value which indicates its position within the vector. These vectors are then combined into a matrix, as can be seen in Figure \ref{fig:bog}.

\begin{figure}[h]
\begin{center}
\[ \left( \begin{array}{ccc}
0 & 0 & 2 \\
4 & 0 & 4 \\
0 & 3 & 2 \\
5 & 0 & 1 \end{array} \right)\]
\end{center}
\caption{The document-term matrix representing the bag of words for a corpus of 4 documents containing 3 features}
\label{fig:bog}
\end{figure}

In Figure \ref{fig:bog}, each row represents a document within the corpus\footnote{ Set of documents.} and each column a term within the vocabulary\footnote{ Set of words within the corpus.}. A value of 0 for the element in row $\mathbf{j}$ and column $\mathbf{i}$ indicates that the term $\mathbf{i}$ is not present within document $\mathbf{j}$. The value of each element within the matrix is dependent on how each term is weighted (as explained in Section \ref{3weight}).
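As a minimal sketch, the document-term matrix for a toy corpus can be constructed as follows, using raw term frequency as the weighting; the corpus itself is purely illustrative.

```python
# A minimal bag-of-words sketch: build the document-term matrix for a
# toy corpus, using raw term frequency as the weighting.

corpus = [
    "the match was a great match",
    "the council held a meeting",
    "a great meeting about the match",
]

# Assign each vocabulary term a fixed column index.
vocabulary = sorted({w for doc in corpus for w in doc.split()})
index = {term: i for i, term in enumerate(vocabulary)}

matrix = []
for doc in corpus:
    row = [0] * len(vocabulary)
    for word in doc.split():
        row[index[word]] += 1          # term-frequency weighting
    matrix.append(row)

print(vocabulary)
print(matrix[0])   # row for the first document
```

Each row of `matrix` corresponds to one document and each column to one vocabulary term, exactly as in Figure \ref{fig:bog}.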

\subsection{Bag of Concepts}

Another approach is to use feature extraction techniques to acquire a more complex representation of a document. This attempts to capture the meaning and context of the document, rather than simply treating words as individual features. 

An example of this would be the N-gram representation proposed by Dumais et al \cite{dumais}, which uses sequences of N words per feature rather than just one. Another example would be distributed clustering methods, as proposed by Baker and McCallum \cite{synonyms}, which attempt to merge synonyms together. 

Although these methods tend to radically reduce the size of the feature space, they have been shown to provide slightly worse classification accuracy \cite{concepts}. Due to the extra computation these bag of concepts methods require, and their generally negative influence on accuracy, the bag of words approach shall instead be used in this project.

\section{Term Weighting in a Feature Space} \label{3weight}

In order for terms to be represented in numerical form, they must be weighted according to some pre-determined method. This provides a numerical representation of a term's importance in relation to others in the document. Selecting an appropriate weighting method is crucial to accurately representing a document within a feature space. There are several ways in which this weighting can be determined:

\subsection{Binary}

This weighting simply indicates the presence or otherwise of a term within the document. The simplicity of this approach means that a lot of information about each term is lost, leading to an inaccurate representation of the document.

\subsection{Term Frequency} \label{tf}

This weighting represents the total number of occurrences of a term within the document. Although providing more information than binary representation, it too is a rather naive evaluation of a term's significance.

\subsection{TF-IDF Weighting}

TF-IDF is the most widely used weighting method \cite{bns}, combining term frequency (as described in Section \ref{tf}) with the number of occurrences in the entire corpus. This is calculated by multiplying a term's frequency within the given document (TF) by its inverse document frequency (IDF). This can be expressed as: 

\begin{equation}
\mathbf{ {w}_{ij} = {tf}_{ij} \times \log{\frac{N}{n}}}
\end{equation}

where $\mathbf{N}$ is the total number of documents, $\mathbf{n}$ is the number of documents in which the term is present and $\mathbf{{tf}_{ij}}$ is the term frequency of word $\mathbf{i}$ in document $\mathbf{j}$. 

This means that regularly occurring terms, such as \textit{`and'} or \textit{`the'}, will have a low weighting. Rarer words, which are more indicative of a document's true meaning, will therefore have greater discriminatory power.
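The weighting can be sketched as follows for a toy corpus of tokenised documents, applying the equation above directly (the natural logarithm is used here; any base only rescales the weights):

```python
import math

# A small sketch of the TF-IDF weighting w_ij = tf_ij * log(N / n),
# applied to a toy corpus of tokenised documents.

docs = [
    ["the", "match", "was", "great"],
    ["the", "council", "met"],
    ["the", "match", "ended", "early"],
]
N = len(docs)

def tf_idf(term, doc):
    tf = doc.count(term)                       # tf_ij
    n = sum(1 for d in docs if term in d)      # document frequency
    return tf * math.log(N / n)

# 'the' appears in every document, so its weight is tf * log(3/3) = 0.
print(tf_idf("the", docs[0]))     # -> 0.0
# 'council' appears in only one document, so it is weighted highly.
print(round(tf_idf("council", docs[1]), 3))
```

As the output shows, the ubiquitous term \textit{`the'} receives zero weight while the rare term carries the greatest discriminatory power.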

\section{Feature Selection} \label{3sel}

Feature selection is the process of acquiring the set of terms that are most indicative of the meaning of a document. This has been shown by Yang and Pedersen \cite{compstudy} to be crucial to accurate text classification. The most commonly used techniques are described below.

\subsection{Stop-Word Removal}

Stop-word removal is the process of excluding words which carry no semantic value to the meaning of a document. These include commonly used terms such as `a', `the', `also'.  This has been shown by Silva and Ribeiro \cite{sw} to provide a significant improvement to both speed and accuracy of text classification methods.

\subsection{Word Stemming}

Word stemming refers to the process of reducing words to their root form. This method removes suffixes,\footnote{ Word endings such as `-ed' or `-ing'.} significantly reducing the dimensions of the feature space.

The most popular algorithm for implementing this process was proposed by Porter \cite{portpop,porter}, and will be the method used for this product.

\subsection{Document Frequency}

Another common method for feature selection is setting a minimum threshold for document frequency. This refers to the number of documents in which the term is present. This has been shown by Rogati and Yang  \cite{highperform} to also increase classification accuracy as it removes noisy terms that may only appear once or twice, a very common situation in text classification.

Selecting the threshold value must be done very carefully, and is particular to each classification task: the aggressive removal of rarer terms, as shown by Yang and Pedersen \cite{compstudy}, can in fact decrease the likelihood of accurate classification. This is because some rarely occurring words can be highly indicative of a particular category. 

\subsection{Information Gain} \label{3ig}

Information gain is a more complex feature selection technique which calculates the importance of a term in defining whether a document belongs to a certain category. Although there are several different ways to represent this equation, Yang and Pedersen \cite{compstudy} define it as:

\begin{center}
$\mathbf{ IG(t) = - \sum_{i=1}^{m} Pr(c_{i})\log{Pr(c_{i})}}$\\
  $\mathbf{+ Pr(t)\sum_{i=1}^{m} Pr(c_{i}|t)\log{Pr(c_{i}|t)}}$\\
  $\mathbf{+ Pr(\bar{t})\sum_{i=1}^{m} Pr(c_{i}|\bar{t})\log{Pr(c_{i}|\bar{t})}}$
\end{center}
 
where $\mathbf{t}$ is the term, $\mathbf{c_{i}}$ is a class and $\mathbf{m}$ is the total number of classes.

As can be seen from the equation, it combines the entropy of the category distribution with the probability of each category given the presence of term $\mathbf{t}$, and the probability of each category given its absence. As with document frequency, a threshold value is established, below which the term is removed from the vocabulary. This ensures that every retained term helps in assigning a document to a category.
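The calculation can be sketched as follows for a toy set of labelled documents, with probabilities estimated by simple counting (natural logarithms are used; the choice of base only rescales the result):

```python
import math

# Sketch of information gain for a single term t over labelled documents.
# Each document is (set_of_terms, class_label); probabilities are
# estimated by simple counting over this illustrative data.

docs = [
    ({"match", "goal"}, "sport"),
    ({"match", "team"}, "sport"),
    ({"council", "vote"}, "politics"),
    ({"vote", "goal"}, "politics"),
]

def info_gain(t):
    N = len(docs)
    classes = {c for _, c in docs}
    with_t = [d for d in docs if t in d[0]]
    without_t = [d for d in docs if t not in d[0]]

    def entropy_term(p):
        return p * math.log(p) if p > 0 else 0.0

    # class entropy: -sum Pr(c_i) log Pr(c_i)
    ig = -sum(entropy_term(sum(1 for _, c in docs if c == ci) / N)
              for ci in classes)
    # + Pr(t) sum Pr(c_i|t) log Pr(c_i|t)  and the same for the absence of t
    for subset in (with_t, without_t):
        p_sub = len(subset) / N
        for ci in classes:
            p_ci = (sum(1 for _, c in subset if c == ci) / len(subset)
                    if subset else 0.0)
            ig += p_sub * entropy_term(p_ci)
    return ig

# 'match' appears only in sport documents, so it carries maximal information.
print(round(info_gain("match"), 3))
# 'goal' is split evenly between the classes, so it carries none.
print(round(info_gain("goal"), 3))
```

A feature selection pass would retain \textit{`match'} and discard \textit{`goal'} under any positive threshold.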


\section{Text Classification} \label{2tc}

As discussed in Section \ref{1textclass}, statistical categorisation will be used to classify text within the articles. There are various different statistical learning methods that can be used to perform text classification, some of which are described below:


\subsection{Support Vector Machines} \label{2svm}

Support Vector Machines are one of the most widely used text classification methods and the approach used in this project. Initially proposed by Cortes and Vapnik \cite{vapnik}, they separate two classes within a feature space by attempting to draw a hyper-plane between them.

\subsubsection*{Binary Classification}

Support Vector Machines (SVMs) involve separating data into two classes within an N-dimensional space.\footnote{where N denotes the number of features present in each document (as described in Section \ref{3dr})} The principal aim is to identify a hyper-plane to separate the instances of each class within the feature space. When only two features are present (thus a two-dimensional feature space), this can be represented by a line as shown in Figure \ref{fig:goodSVM}.

\begin{figure}[h]
\begin{center}
\includegraphics[width=3in]{./Figures/goodSVM.pdf}
\end{center}
\caption{Example of hyper-plane separating instances in a two-dimensional features space}
\label{fig:goodSVM}
\end{figure}

The hyper-plane can be described by $\mathbf{ w \cdot x + b = 0}$ where $\mathbf{w}$ is the normal to the hyper-plane and $\mathbf{b}$ is the bias. Implementing SVMs involves selecting $\mathbf{w}$ and $\mathbf{b}$ such that:

\begin{equation}
\mathbf{w \cdot x_i + b  \geq +1  \mbox{	  if } y_i = +1}
\end{equation}
\begin{equation}
\mathbf{w \cdot x_i + b  \leq -1   \mbox{  if } y_i = -1}
\end{equation}

where $\mathbf{y_i}$ indicates which class an instance belongs to. 

This allows for any unlabelled instances to be classified by determining on which side of the hyper-plane they fall. This decision function can be formally described as:

\begin{equation}
\mathbf{f(x) = Sign(w \cdot x +b)}
\end{equation}

where $\mathbf{Sign()}$ determines whether a value is positive or negative.
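As a sketch, the decision function can be implemented as follows, assuming illustrative values of $\mathbf{w}$ and $\mathbf{b}$ that the training procedure would normally supply:

```python
# A minimal sketch of the SVM decision function f(x) = Sign(w.x + b)
# for a two-dimensional feature space. The values of w and b below are
# purely illustrative; training would normally determine them.

w = [2.0, -1.0]   # normal to the hyper-plane
b = -0.5          # bias

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

print(classify([1.0, 0.5]))    # w.x + b = 2 - 0.5 - 0.5 = 1.0  -> +1
print(classify([0.0, 1.0]))    # w.x + b = -1 - 0.5 = -1.5      -> -1
```

The class of an unlabelled instance is thus determined purely by which side of the hyper-plane it falls on.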

The problem is therefore to find the best choices of $\mathbf{w}$ and $\mathbf{b}$, such that the hyper-plane best divides the feature space.

\begin{figure}[h]
  \centering
  \subfigure[Possible Choice of Hyper-Plane]{\includegraphics[width=2.7in]{bad2SVM.pdf}}            
  \subfigure[Alternative Choice of Hyper-Plane]{\includegraphics[width=2.7in]{badSVM.pdf}}
  \caption{Examples of sub-optimal choices of Hyper-plane}
  \label{fig:badSVM}
\end{figure}

As can be seen in Figure \ref{fig:badSVM}, both hyper-planes represent different choices for those values, with neither providing the optimal fit shown in Figure \ref{fig:goodSVM}. This best fit is achieved when the margins between the hyper-plane and the nearest instances of each class are maximised. This means maximising the distance between the support vectors.\footnote{ The vectors defined by the nearest instances of each class to the hyper-plane.} These can be defined by the following equations:

\begin{equation}
\mathbf{w \cdot x_i + b  = 1 }
\end{equation}
\begin{equation}
\mathbf{w \cdot x_i + b  = -1}
\end{equation}

The distance between these two vectors would thus correspond to the margin, meaning the optimal choice of $\mathbf{w}$ and $\mathbf{b}$ would involve maximising this value. This is achieved using a method proposed by Boser et al \cite{boser}, which performs Quadratic Programming optimisation. This obtains the values which optimally orientate the hyper-plane to separate the two classes.


\subsubsection*{Soft Margins} \label{3error}

The binary classification method defined above will attempt to place all instances of the two classes on either side of the hyper-plane. This means that it will struggle to fit a hyper-plane to noisy data-sets which cannot be divided absolutely.
With most real-life data containing some outlying instances, a positive slack variable $\xi$ is introduced which allows some points from either class to appear on the incorrect side of the hyper-plane.

This adds a new parameter $\mathbf{C}$ to the optimisation problem, which dictates the trade-off made between the slack allowed for misclassified instances and the size of the margin between the hyper-plane and the support vectors. This parameter also ensures that the hyper-plane does not over-fit the training data by attempting to incorporate outlying instances that would distort its position.

\subsubsection*{Kernel Methods} \label{3ker}

\begin{figure}[h]
\begin{center}
\includegraphics[width=3in]{./Figures/nonsep.pdf}
\end{center}
\caption[Example of non-linear instances]{An example of a set of instances which cannot be linearly separated.}
\label{fig:nonlinear}
\end{figure}

There are many classification problems for which the data is not linearly separable, an example of which can be seen in Figure \ref{fig:nonlinear}. This means it is not possible to create a hyper-plane, even with soft margins, that will accurately separate the two classes. 

In order to resolve this issue, instances are converted from a low to high dimensional feature space using a non-linear mapping function. This is commonly known in Machine Learning as the `kernel trick' \cite{intro}. An example of this would be converting $\mathbf{x \in R^3}$ to $\mathbf{\phi (x) \in R^7}$ using the mapping function:


\begin{equation}
\mathbf{\phi (x) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, x_1^2, x_2^2, x_3^2)}
\end{equation}

This allows data that may have been intermeshed in a lower dimensional space to be linearly separable. There are many different kernels that can be used, with the most popular kernel functions being the Radial Basis Kernel\footnote{ Also known as the Gaussian Kernel.} (RBF) and the Polynomial Kernel. When dealing with a large set of data with a considerable number of features, such as in text classification, non-linear kernels are known to slow down the classification process considerably when compared with the linear approach \cite{prac}.

\subsubsection*{Multi-Class Classification}

There are several different methods for using the binary classification techniques to perform multi-class classification. These include `one-against-all',  `one-against-one', and directed acyclic graph SVMs (DAGSVM), as proposed by Platt et al \cite{platt}. 

With one-against-one and DAGSVM achieving similar results in a comparative study by Hsu and Lin\footnote{ Both were superior to `one-against-all'} \cite{hsu}, the former shall be used in this project.


`One-against-one', first proposed by Knerr et al \cite{knerr}, involves producing a binary model for each pair of classes. When a new instance is to be classified, it is presented to each of these models, and the class that wins the most pairwise comparisons is selected. 
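The voting scheme can be sketched as follows. The pairwise models here are hypothetical stand-ins (a naive substring check); a real system would use trained binary SVMs for each pair of classes.

```python
from itertools import combinations

# An illustrative sketch of `one-against-one' voting. The pairwise
# models are hypothetical stand-ins; in practice each would be a
# trained binary SVM.

classes = ["sport", "politics", "business"]

def make_model(a, b):
    def model(x):
        # Stand-in decision rule: prefer class a if its name appears in x.
        return a if a in x else b
    return model

models = [make_model(a, b) for a, b in combinations(classes, 2)]

def classify(x):
    votes = {c: 0 for c in classes}
    for model in models:
        votes[model(x)] += 1          # each pairwise model casts one vote
    return max(votes, key=votes.get)

print(classify("sport news"))   # -> sport
```

With $m$ classes this scheme requires $m(m-1)/2$ binary models, each trained only on the data of its two classes.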

\subsection{Other Methods}

\subsubsection*{Naive Bayesian}

Naive Bayes classifiers use a probabilistic model to calculate the likelihood of a document belonging to a particular class. This can be expressed as:

\begin{equation}
\mathbf{Pr(C|x) = \frac{Pr(x|C)Pr(C)}{Pr(x)}}
\end{equation}

where $\mathbf{C}$ is the class and $\mathbf{x}$ is a document represented as a feature vector.

The classification process is simplified by making the assumption that features in the document are unrelated, with each contributing independently to the probability estimate.  The document is assigned to the class for which $\mathbf{Pr(C|x)}$ is greatest.

This relatively simple method has been shown by Lewis \cite{lewis} and others to perform text classification to a high degree of accuracy and to work well with the significant number of features present in text documents.
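A minimal multinomial Naive Bayes sketch might look as follows for a toy training set; Laplace smoothing is added here (an assumption not discussed above) to avoid zero probabilities for unseen words.

```python
import math
from collections import Counter

# A minimal multinomial Naive Bayes sketch with Laplace smoothing,
# trained on a toy set of tokenised, labelled documents.

train = [
    (["goal", "match", "team"], "sport"),
    (["match", "win"], "sport"),
    (["council", "vote", "budget"], "politics"),
]

classes = {c for _, c in train}
priors = {c: sum(1 for _, y in train if y == c) / len(train) for c in classes}
counts = {c: Counter() for c in classes}
for words, c in train:
    counts[c].update(words)
vocab = {w for words, _ in train for w in words}

def classify(words):
    # Log-probabilities are summed (rather than probabilities multiplied)
    # for numerical stability; the maximising class is unchanged.
    def log_posterior(c):
        total = sum(counts[c].values())
        return math.log(priors[c]) + sum(
            math.log((counts[c][w] + 1) / (total + len(vocab)))  # smoothing
            for w in words)
    return max(classes, key=log_posterior)

print(classify(["match", "goal"]))      # -> sport
```

The independence assumption appears in the sum over words: each term contributes to the posterior without reference to the others.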


\subsubsection*{k-Nearest Neighbour}

k-Nearest Neighbour (KNN) is one of the simplest statistical learning methods and shown by Yang \cite{yangknn} to perform well for text classification. It is based on the assumption that documents near to each other within the feature space are likely to belong to the same class. 

As the name implies, KNN classifies a document by comparing it to its closest neighbours within the feature space, with k denoting the number of comparisons to be made.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{./Figures/knn.pdf}
\end{center}
\caption[Example of k-Nearest Neighbour Classification]{An example of 3-Nearest Neighbour Classification on an unlabelled instance \textit{p}}
\label{fig:knn}
\end{figure}

In Figure \ref{fig:knn}, three-nearest-neighbour classification is performed, with point \textit{p} being compared to the three points closest to it. With two of them belonging to the negative class, \textit{p} is thus also classified as negative.
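The method can be sketched as follows for a two-dimensional feature space; the labelled points are illustrative.

```python
import math

# A minimal k-nearest-neighbour sketch in a two-dimensional feature
# space. Labelled points are (x, y, class); the unlabelled point p is
# classified by majority vote among its k closest neighbours.

labelled = [
    (1.0, 1.0, "+"), (1.5, 2.0, "+"),
    (5.0, 5.0, "-"), (5.5, 4.5, "-"), (4.5, 5.5, "-"),
]

def classify(p, k=3):
    by_distance = sorted(labelled,
                         key=lambda q: math.dist(p, (q[0], q[1])))
    votes = [label for _, _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

print(classify((5.0, 4.8)))   # nearest three are all negative -> '-'
```

In a real text classification setting, Euclidean distance over raw coordinates would be replaced by a distance or similarity measure over the document vectors of Section \ref{3dr}.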

\section{Topic Categorisation} \label{3topic}

In order to automatically organise articles into the sections of the newspaper they are most relevant to, topic categorisation must be performed. The simplest method of determining an article's topic would be to use any existing information provided by the source. With no common standards yet adopted by web publishers for including semantic information with their content, this task will be performed using a combination of methods.

\subsection{Link Analysis} \label{3la}

Among the basic information provided by RSS feeds, the structure of the hypertext link can be used to perform simple topic analysis. The advent of content management systems has brought structured links to news content, with several major web publishers organising their articles according to category.

This means that the presence of the string \textit{'/sport/'} within a link to an article is strongly indicative of the article belonging to the `Sports' category. It must be noted that this is different to searching for the string \textit{`sport.'} This more naive search would wrongly incorporate links in which the word is used in a different context. An example of this would be the headline \textit{ `New Acer Aspire-One Netbooks Sport Beefy Specs, Low Price.'}\footnote{ The link to this article is \url{http://www.pcworld.com/article/165045/new_acer_aspire_one_netbooks_sport_beefy_specs_low_price.html}}
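This segment-based matching can be sketched as follows; the category list is illustrative. Matching whole path segments, rather than substrings, prevents the false positive described above.

```python
from urllib.parse import urlparse

# A sketch of link-based topic detection: a category is assigned only
# when it appears as a complete path segment, not merely as a substring.

CATEGORIES = {"sport", "politics", "business", "technology"}

def topic_from_link(url):
    segments = urlparse(url).path.strip("/").split("/")
    for segment in segments:
        if segment in CATEGORIES:
            return segment
    return None

print(topic_from_link("http://news.example.com/sport/football/12345"))
# A link that merely contains the word does not match:
print(topic_from_link(
    "http://www.pcworld.com/article/165045/"
    "new_acer_aspire_one_netbooks_sport_beefy_specs_low_price.html"))
```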

Links of this variety are, however, only present for a very small set of news sources and can therefore not be relied upon to perform topic categorisation.

\subsection{Content Analysis} 

The categorisation of the article contents shall be carried out using Support Vector Machines (as described in Section \ref{2svm}). They have been shown to be among the best statistical learning methods for text classification, as they are able to deal with a large number of features and multiple classes \cite{joa}.

\section{Location Categorisation}

In order to establish an article's relevance to a given location, a combination of techniques will be used to provide a structured ranking system.

\subsection{Source Analysis}

Analysis of a large number of local news stories shows that an article relevant to a particular geographical area will rarely come from a news source based far away from it. It is even less common, unless the location is the site of a significant global event, for the news source to be located in a different country entirely. It is therefore possible in the ranking system to negatively weight news sources not based in the same country as the specified location.

\subsection{Location Extraction} \label{3lex}

Rather than analysing the whole content of the article, it is possible simply to extract any locations referenced within it. Entity extraction from unstructured text is an area of computational linguistics which has been very heavily researched and for which many methods currently exist \cite{entitynews,entitytext,entityweb}. Most of these however are quite complex and computationally expensive.

With geographical entities always possessing a capitalised first letter, it is possible just to compare any capitalised word or phrase against a database of location names. This will allow for a list of locations to be extracted, also known as a `bag of locations' (similar to the bag of words representation described in section \ref{3bog}).
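This simple gazetteer approach can be sketched as follows; the database of location names is a hypothetical stand-in for the location database described later in this chapter.

```python
import re

# A sketch of the simple gazetteer approach: capitalised words are
# compared against a (hypothetical) database of location names to
# build a `bag of locations'.

GAZETTEER = {"Bristol", "London", "Cardiff"}

def bag_of_locations(text):
    capitalised = re.findall(r"\b[A-Z][a-z]+\b", text)
    return [word for word in capitalised if word in GAZETTEER]

text = "The mayor of Bristol met officials from London before The match."
print(bag_of_locations(text))   # -> ['Bristol', 'London']
```

Note that capitalised non-locations (here, \textit{`The'}) are filtered out by the gazetteer lookup rather than by any linguistic analysis.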

Once these have been acquired, there are several approaches that can be taken to calculate relevance to a particular location. These methods assume no other prior knowledge for a location other than its name and GPS (Global Positioning System) coordinates.

\subsubsection*{Proportional Location Referencing} \label{3plr}

This method determines the relevance to a particular location by dividing the number of references to it, by the total number of geographical references located in the document. 

As well as direct references to a particular location, any references to other entities associated with it, such as alternative names, neighbourhoods or famous landmarks, are also included. 

Although this method does not necessarily capture the full concept of the article, it requires very little prior knowledge other than a list of relevant geographical terms. For this project, it shall be used in conjunction with content analysis (see Section \ref{3ca}) to provide a ranking of relevance to a particular query.
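The calculation can be sketched as follows; the table of entities associated with each location is purely illustrative.

```python
# A sketch of proportional location referencing: relevance to a target
# location is the share of geographical references that point to it,
# including associated entities (alternative names, neighbourhoods,
# landmarks). The table below is illustrative only.

ASSOCIATED = {
    "bristol": {"bristol", "clifton", "avonmouth"},
}

def relevance(bag_of_locations, target):
    refs = [loc.lower() for loc in bag_of_locations]
    if not refs:
        return 0.0
    hits = sum(1 for loc in refs if loc in ASSOCIATED.get(target, {target}))
    return hits / len(refs)

bag = ["Bristol", "Clifton", "London", "Bristol"]
print(relevance(bag, "bristol"))   # 3 of 4 references -> 0.75
```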

\subsection{Content Analysis} \label{3ca}

The most generic way of establishing an article's relevance to a particular location is to analyse the content of the article itself. The most common approach is to extract any geographical references from the text (as described in Section \ref{3lex}) and compare them to a pre-defined geographical taxonomy \cite{dogeo}. This is the method implemented by Google News.

Although this will maximise recall,\footnote{ The fraction of relevant articles that will be correctly classified.} it will also lead to a significant number of false positives.\footnote{ Articles incorrectly associated with a location.} An example of such a case would be articles about `Bristol Palin'\footnote{ Daughter of former US Vice-Presidential Candidate, Sarah Palin.} being considered strongly relevant to Bristol, UK.

Another problem with this method is its failure to classify any articles which do not contain direct geographical references: an article about a local concert may not reference any geographical entities, despite its significant relevance to a particular location.

An alternative approach, and the one used in this project, is to perform text classification using location as a topic. This will allow for the terms associated with a particular location to be extracted from the training data, rather than being pre-determined by the user. This will allow terms unrelated to geographical entities to help indicate the relevance to a particular location.

\section{Finding Neighbouring Locations} \label{3fnl}

In order to expand the scope of the initial search query if not enough articles are obtained, the nearest neighbouring locations will need to be discovered. 

\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in]{./Figures/search.pdf}
\end{center}
\caption[Neighbour Detection for Bristol, UK]{An example of searching for neighbouring locations to Bristol. [Source: Google Maps]}
\label{fig:3bris}
\end{figure}

In order to carry out this process, GPS coordinates will be used. All locations within a pre-defined radius, as shown in Figure \ref{fig:3bris}, will be acquired and organised in order of proximity to the target location. If this still does not produce a sufficient number of articles, the process is repeated with a greater search radius.
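This search can be sketched as follows, using the haversine great-circle distance over a small in-memory stand-in for the location database; the coordinates are approximate and illustrative.

```python
import math

# A sketch of neighbour detection: locations (from a stand-in for the
# location database) within a given radius of the target are returned
# in order of proximity, using the haversine great-circle distance.

locations = {                      # name -> (latitude, longitude)
    "Bristol": (51.4545, -2.5879),
    "Bath": (51.3811, -2.3590),
    "Cardiff": (51.4816, -3.1791),
    "London": (51.5074, -0.1278),
}

def haversine(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def neighbours(target, radius_km):
    centre = locations[target]
    near = [(haversine(centre, coords), name)
            for name, coords in locations.items() if name != target]
    return [name for dist, name in sorted(near) if dist <= radius_km]

print(neighbours("Bristol", 50))   # Bath and Cardiff, nearest first
```

Widening the radius (the fallback described above) simply admits more distant entries, such as London, into the result.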

\subsection{Location Database}

In order to carry out this process in real time, GPS coordinates of locations will have to be acquired quickly and efficiently. Despite the presence of APIs for both Google Maps\footnote{ Google Maps API, URL: \url{http://code.google.com/apis/maps/} [Last Accessed: 09/03/09]} and Microsoft's Live Maps\footnote{ Microsoft Virtual Earth, URL: \url{http://dev.live.com/VirtualEarth/} [Last Accessed: 09/03/09]}, neither has the functionality to support the automatic detection of neighbouring locations.

A location database shall therefore be created to allow the program to perform this task. It shall be specifically structured in order to obtain the GPS coordinates given the name of a location. It will also allow for the names of locations to be obtained, given a range of coordinates, in order to acquire the nearest neighbours.  

