% Chapter 4

\chapter{Project Execution}
\label{Chapter4}
\lhead{Chapter 4. \emph{Project Execution}}

This chapter provides an overview of how the local newspaper generator was built, including further description of how each of the major components of the project was implemented. This involved the acquisition of news articles from RSS feeds, the analysis of their location and topic, as well as a tool to broaden the scope of a location-based search if insufficient results are returned initially.

\section{Development Approach}

\subsection{Language Selection}

The Java programming language was used to implement this project in order to take advantage of its object-oriented nature and to ensure that each component of the system could be developed separately. It also allowed the use of libraries and packages available for Support Vector Machines\footnote{ LIBSVM, a library for support vector machines. \url{http://www.csie.ntu.edu.tw/~cjlin/libsvm} [Last Accessed: 17/05/09]} and HTML parsing\footnote{ Swing HTML Parser. \url{http://java.sun.com/products/jfc/tsc/articles/bookmarks/index.html} [Last Accessed: 20/05/09]}.

\subsection{Version Control}

Subversion,\footnote{ Available from \url{http://subversion.tigris.org/}} an open-source version control system, was used so that the project could be developed from multiple computers and to prevent any hardware failure from impeding its progress.

\section{Product Overview}

\begin{figure}[htbp]
		\begin{center}
		\includegraphics[width=5in]{./Figures/delivery.pdf}
		\end{center}
	\caption[A High Level Overview]{A high-level overview of the product}
	\label{fig:delivery}
\end{figure}

The product is a complete package which, as can be seen in Figure \ref{fig:delivery}, will take a query in the form of a location, acquire a set of articles and organise them by topic and in order of relevance to the query. If an insufficient number of articles are obtained for the desired location, the query is expanded to incorporate its nearest neighbours (as discussed in Section \ref{4nn}). Once this process is complete, the articles are returned in the form of a web page. An example of the output can be seen in Appendix A.

The tasks to be completed by the product can be divided into four broad categories, as described in Figure \ref{fig:delivery}. The following sections will describe each of these components, their design rationale and the technical challenges faced when implementing them.

\section{Collection of news articles} \label{colart}

There are two methods of collecting articles that could be used by this product:

\begin{itemize}
\item \textbf{Pre-Crawling:} This refers to using web crawlers to search for news feeds, indexing and classifying new articles as they are discovered. These articles would then be stored in a database associated with the product. When a user query is received, the database would be searched to produce the newspaper, minimising the user's waiting time.
\item \textbf{Real-Time:} This approach involves acquiring and classifying articles on a per-query basis. It does not require a database to be established, as the program identifies and extracts the articles in real time.
\end{itemize}

Commercial news aggregators use the pre-crawling approach, as a commercial web site must produce results as quickly as possible. Speed, however, is not a major concern for this project. The real-time approach is therefore used, as it provides the greatest flexibility in the number and variety of articles that can be acquired.

Although the product and its classification techniques will work with any RSS feeds containing news articles, Google News is queried for a list of articles it believes to be relevant to the location, in order to maximise the number of articles obtained. Since Google News's location classification is relatively inaccurate, this provides a suitable set of articles on which this product's classification techniques can be tested, allowing a value-added service to be offered on top of the aggregation Google News currently provides.

\subsection{Collection Process}

\begin{figure}[htbp]
		\begin{center}
		\includegraphics[width=5.5in]{./Figures/collect.pdf}
		\end{center}
	\caption[The Article Collection Process]{An overview of the steps taken to acquire news articles for analysis}
	\label{fig:collect}
\end{figure}

The collection process carried out by the product is split into three principal steps, as can be seen in Figure \ref{fig:collect}. For the first two steps, RSS feeds are acquired and then parsed in order to obtain the articles they contain. This provides the link and article segment used to extract the full article contents (as described in Section \ref{tarss}).
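The second step above amounts to pulling the link and description excerpt out of each \texttt{<item>} element of the feed. A minimal sketch in Java, using the standard DOM parser (class and method names here are illustrative, not taken from the actual implementation):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.*;

// Sketch of step two of the collection process: parsing an RSS feed
// to recover, for each <item>, the link and the description excerpt
// that is later used to locate the full article within its web page.
public class RssParser {
    /** Returns a {link, description} pair for every item in the feed. */
    public static List<String[]> items(String rssXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(rssXml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = doc.getElementsByTagName("item");
            List<String[]> result = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                Element item = (Element) nodes.item(i);
                result.add(new String[]{ text(item, "link"), text(item, "description") });
            }
            return result;
        } catch (Exception e) {
            throw new RuntimeException("Malformed feed", e);
        }
    }

    private static String text(Element item, String tag) {
        NodeList list = item.getElementsByTagName(tag);
        return list.getLength() > 0 ? list.item(0).getTextContent() : "";
    }
}
```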

\subsubsection*{Extracting full article content}

In order to extract the full content of each news article, its containing web page is parsed and the extract from the RSS feed is used to detect the article's location within the page. Only one sentence of the extract is used, in order to ensure that it will be contained within the same HTML tag.

The HTML tree is searched, and when the tag containing the extract is detected, its type and depth within the tree are recorded. The parent node is located and all text within its child nodes of the same tag type, and at the same depth within the tree, is identified as the content of the article. Any text within further child nodes that are formatting tags, such as bold or italics, is also extracted as article content.
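This heuristic can be sketched with a minimal stand-in tree (the real implementation walks Java Swing's HTML document model, and also matches on depth and descends into formatting tags; the simplified version below matches siblings of the same tag type only, and all names are illustrative):

```java
import java.util.*;

// Sketch of the extraction heuristic: find the node whose text
// contains the RSS extract, then concatenate the text of every
// sibling node sharing the same tag as the article body.
public class ArticleExtractor {
    static class Node {
        final String tag;
        final String text;
        final List<Node> children = new ArrayList<>();
        Node parent;
        Node(String tag, String text) { this.tag = tag; this.text = text; }
        void add(Node child) { child.parent = this; children.add(child); }
    }

    /** Depth-first search for the node whose text contains the extract. */
    static Node findContaining(Node node, String extract) {
        if (node.text != null && node.text.contains(extract)) return node;
        for (Node child : node.children) {
            Node found = findContaining(child, extract);
            if (found != null) return found;
        }
        return null;
    }

    /** Concatenate text from all siblings sharing the matched node's tag. */
    static String extractArticle(Node root, String extract) {
        Node match = findContaining(root, extract);
        if (match == null || match.parent == null) return "";
        StringBuilder body = new StringBuilder();
        for (Node sibling : match.parent.children) {
            if (sibling.tag.equals(match.tag) && sibling.text != null) {
                body.append(sibling.text).append(' ');
            }
        }
        return body.toString().trim();
    }
}
```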

With the implementation of this approach being based on the Java Swing libraries, it is fully dependent on the page being properly formatted according to W3C standards.\footnote{ Full standard described at \url{http://www.w3.org/TR/xhtml1/} [Last Accessed: 26/05/09]} This means that any unrecognised tags will stop the page from being parsed. The results of this approach are fully evaluated in Section \ref{5vace}.

\section{Selection and Classification of Acquired Articles}

Once a list of articles is acquired, the selection process determines which section of the newspaper each article should be inserted into. Each article's relevance to the desired location is then evaluated and used to order the articles within each topic list. This requires the topic and location classification techniques that form the most significant component of the implementation.

\subsection{Scope}

\subsubsection*{Topics}

The current prototype assigns articles into one of four sections: `News,' `Business,' `Sports,' and `Entertainment.' Each has its own set of training data, with the `News' category encompassing all topics not covered elsewhere. These sections are not hard-coded into the system, allowing for additional ones to be easily added, as long as sufficient training data for that topic is provided.

\subsubsection*{Location Ranking} \label{scope:loc}

As with topic classification, the number of locations covered is dependent solely on the sets of training data provided. The current prototype implements location classification for four locations: Bristol, Durham, Manchester and London. This set is chosen to represent UK towns and cities of various sizes and geographical locations. 

\subsection{Selection Process}

\begin{figure}[h]
		\begin{center}
		\includegraphics[width=5.5in]{./Figures/selection.pdf}
		\end{center}
	\caption[The Article Selection Process]{An overview of the steps taken during the selection process.}
	\label{fig:select}
\end{figure}

As can be seen in Figure~\ref{fig:select}, topic and location classification is done sequentially and involves similar methods. The difference between the two is that topic classification is used to assign an article to a section of the newspaper, whereas location classification is used to order articles in those lists by their relevance to the query. 

\subsubsection*{Topic Analysis}

In order to identify the topic, link analysis is first performed (as discussed in Section \ref{3la}). All remaining articles are then assigned to a class using a trained SVM (Support Vector Machine) model.

\subsubsection*{Location Analysis}

A ranking is assigned to each article in order to decide its relevance to the queried location. Although principally using the decision value given by SVM classification, proportional referencing (as discussed in Section \ref{3plr}) also plays a role. The ranking is calculated using the following equation:

\begin{equation}
\mathbf{ Rank(x) = (DV(x) \times 100) + PLR(x)}
\label{eq:rank}
\end{equation}

where $\mathbf{x}$ is the article, $\mathbf{DV(x)}$ is the decision value provided by the SVM model and $\mathbf{PLR(x)}$ the result of proportional referencing.

As can be seen in Equation \ref{eq:rank}, proportional referencing will influence the rank only when the decision values are almost equal. This is because $\mathbf{DV(x)}$ was found to be a significantly better indicator of relevance to a location. $\mathbf{PLR(x)}$ is used simply to separate articles with near-equal decision values and to provide a rank when dealing with locations for which the SVM model is not trained (see Section \ref{scope:loc}).
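In code, Equation \ref{eq:rank} amounts to a single method; a minimal sketch (class and method names are illustrative):

```java
// Sketch of the article ranking from the rank equation: the SVM
// decision value dominates, and proportional location referencing
// (PLR) only separates articles with near-equal decision values.
public class ArticleRanker {
    /**
     * @param decisionValue the SVM decision value DV(x)
     * @param plr           the proportional location referencing score PLR(x)
     * @return the combined rank used to order articles within a topic
     */
    public static double rank(double decisionValue, double plr) {
        return (decisionValue * 100.0) + plr;
    }
}
```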

\subsubsection*{Location Extraction}

In order to calculate the proportional referencing value, a `bag of locations' is obtained from each article. This acquires any capitalised word or phrase, and checks to see if they are locations, using the data structure established for query expansion (see Section \ref{4nn}). 

Checking every capitalised word produces a significant number of redundant comparisons, as it also includes any word that starts a sentence, as well as non-location proper nouns. For the purposes of this project, however, this method is sufficient and does not add a significant computational overhead.
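The extraction itself can be sketched as follows (a plain \texttt{HashSet} stands in here for the query-expansion data structure; all names are illustrative):

```java
import java.util.*;
import java.util.regex.*;

// Sketch of the `bag of locations' extraction: every capitalised
// word is collected and kept only if it appears in the set of known
// locations. Duplicates are retained, since the frequency of
// references feeds into the proportional referencing value.
public class LocationExtractor {
    private static final Pattern CAPITALISED = Pattern.compile("\\b[A-Z][a-z]+\\b");

    public static List<String> bagOfLocations(String text, Set<String> knownLocations) {
        List<String> bag = new ArrayList<>();
        Matcher m = CAPITALISED.matcher(text);
        while (m.find()) {
            if (knownLocations.contains(m.group())) {
                bag.add(m.group());
            }
        }
        return bag;
    }
}
```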

\subsection{Classification Using Support Vector Machines}

\begin{figure}[htp]
		\begin{center}
		\includegraphics[width=5.5in]{./Figures/classification.pdf}
		\end{center}
	\caption[The SVM Classification Process]{An overview of the steps taken to use SVMs to perform classification.}
	\label{fig:classify}
\end{figure}

Although the results are used in different ways, both the topic and location analysis stages use the same process for classification using Support Vector Machines, outlined in Figure~\ref{fig:classify}. This can be further divided into the training and classification processes.

\subsection{Training Process}

The training process can be divided into three stages:

\begin{enumerate}
\item \textbf{Pre-processing:} Training articles are first parsed and converted into a list of terms for each document. These lists are then reduced by performing stop-word removal\footnote{ Using a modified stop word list acquired from WEKA, \url{http://weka.wiki.sourceforge.net/} [Last Accessed 22/05/09]} and word stemming.\footnote{ Using a Java implementation of Porter's Algorithm \cite{porter}  acquired from, \url{http://tartarus.org/~martin/PorterStemmer/} [Last Accessed 18/05/09] }
\item \textbf{Feature Selection:}  Once all documents have been parsed and the pre-processing stage completed, feature selection is performed using either the document frequency or information gain thresholds (as discussed in Section \ref{3sel}), depending on which is selected. Once this step has been completed, the corpus and vocabulary for the training data are stored, in order to be used during the classification process. 
\item \textbf{SVM Model:} TF-IDF values are then calculated, with each document vector organised in order of the feature's appearance within the vocabulary,\footnote{ A pre-requisite of the SVM library used.} and then scaled. The kernel parameters (such as the Error rate) are then optimised (as discussed in Section \ref{3error}) and the SVM classification model produced.
\end{enumerate}
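The TF-IDF weighting in the third stage can be sketched as follows, with each document vector ordered by the vocabulary list as the SVM library requires (this uses the common $\mathit{tf} \times \log(N/\mathit{df})$ form; names and the exact weighting variant are illustrative):

```java
import java.util.*;

// Sketch of the TF-IDF weighting used to build document vectors.
// The vocabulary list fixes the feature indices, so every document
// vector is ordered consistently for the SVM library.
public class TfIdf {
    /** idf(t) = log(N / df(t)) over the whole training corpus. */
    public static double idf(int numDocs, int docFreq) {
        return Math.log((double) numDocs / docFreq);
    }

    /** One document's vector, ordered by the vocabulary list. */
    public static double[] vector(Map<String, Integer> termCounts,
                                  List<String> vocabulary,
                                  Map<String, Integer> docFreqs,
                                  int numDocs) {
        double[] v = new double[vocabulary.size()];
        for (int i = 0; i < vocabulary.size(); i++) {
            String term = vocabulary.get(i);
            int tf = termCounts.getOrDefault(term, 0);
            v[i] = tf * idf(numDocs, docFreqs.getOrDefault(term, 1));
        }
        return v;
    }
}
```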

\subsection{Classification Process}

There are two main stages to the classification process:

\begin{enumerate}
\item \textbf{Feature Extraction:} Each article's content is parsed and converted into the feature space. Only features that appear in the training corpus are included, as they are the only ones which are mapped in the SVM model.
\item \textbf{Classification}: Once the features have been extracted, they are scaled relative to the training model to ensure similar values for classification. The SVM model is then used to predict which class each article belongs to, and produce a decision value indicating the certainty that the article belongs to its newly assigned class.  
\end{enumerate}
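The scaling in the second stage rescales each feature using the minimum and maximum observed in the training data, so that query-time vectors fall in the same value range as the model (LIBSVM's \texttt{svm-scale} tool applies the same idea; the sketch below, with illustrative names, scales to $[0, 1]$):

```java
// Sketch of the feature scaling step: each feature is rescaled to
// [0, 1] using the minimum and maximum observed in the *training*
// data, so classification-time vectors match the model's range.
public class FeatureScaler {
    public static double[] scale(double[] features, double[] trainMin, double[] trainMax) {
        double[] scaled = new double[features.length];
        for (int i = 0; i < features.length; i++) {
            double range = trainMax[i] - trainMin[i];
            // A feature that never varied in training carries no
            // information, so it is mapped to zero.
            scaled[i] = (range == 0) ? 0.0
                      : (features[i] - trainMin[i]) / range;
        }
        return scaled;
    }
}
```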

\section{Query Expansion} \label{4nn}

If any of the four sections of the newspaper contain an insufficient number of articles for the desired location, neighbouring locations are identified and, in order of geographical proximity, sent through the collection and selection processes. This is repeated until a full set of articles is present for each topic, providing the user with a complete newspaper.

\subsection{Location Storage}

Using a table of the 1000 largest UK towns, acquired from an online gazetteer,\footnote{ A geographical dictionary obtained from \url{http://world-gazetteer.com/} [Last Accessed: 27/05/09]} detailed information is stored within two data structures to ensure quick and efficient access by either name or GPS coordinates:

\begin{enumerate}
	\item The first structure is a simple \textit{hashtable}, keyed by location name. This allows all geographical information about the initial query to be acquired as quickly as possible.
	\item The second structure is a more complex hashtable of hashtables, with the keys being the longitude and latitude GPS co-ordinates.  The second hashtable contains a list of the locations situated within those co-ordinates. 
	
	This divides the locations into a grid structure, with the length of each segment representing the minimum radius of search for nearest neighbours. This means that to perform a search, at most four queries will be made to the data structure: in the worst case scenario the location will be in the corner of its segment, meaning three neighbouring segments must also be acquired.
\end{enumerate}

This approach was used to maximise the speed of location acquisition, whilst also balancing size concerns. Although a large two-dimensional array would provide the quickest access to locations, it would be a significant memory overhead: a large number of elements within it would be empty as not all segments of the grid contain locations.
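The grid structure and its at-most-four-lookup search can be sketched as follows (cell size and all names are illustrative; the real structure stores full gazetteer records rather than plain strings):

```java
import java.util.*;

// Sketch of the second data structure: a hashtable of hashtables
// keyed by integer grid-cell coordinates derived from latitude and
// longitude. A nearest-neighbour search probes the location's own
// cell plus the three cells nearest its position within that cell,
// so at most four lookups are ever needed.
public class LocationGrid {
    private static final double CELL = 0.5; // grid cell size in degrees (illustrative)
    private final Map<Integer, Map<Integer, List<String>>> grid = new HashMap<>();

    private static int cell(double coord) {
        return (int) Math.floor(coord / CELL);
    }

    public void add(String name, double lat, double lon) {
        grid.computeIfAbsent(cell(lat), k -> new HashMap<>())
            .computeIfAbsent(cell(lon), k -> new ArrayList<>())
            .add(name);
    }

    /** Locations in the cell containing (lat, lon) plus its three
     *  nearest neighbouring cells. */
    public List<String> nearby(double lat, double lon) {
        // Step towards whichever half of the cell the point falls in.
        int latStep = (lat / CELL - Math.floor(lat / CELL)) < 0.5 ? -1 : 1;
        int lonStep = (lon / CELL - Math.floor(lon / CELL)) < 0.5 ? -1 : 1;
        List<String> result = new ArrayList<>();
        for (int dLat : new int[]{0, latStep}) {
            for (int dLon : new int[]{0, lonStep}) {
                Map<Integer, List<String>> row = grid.get(cell(lat) + dLat);
                if (row == null) continue;
                List<String> cellList = row.get(cell(lon) + dLon);
                if (cellList != null) result.addAll(cellList);
            }
        }
        return result;
    }
}
```

The sparsity argument above is visible here: only cells that actually contain a town ever allocate an inner hashtable, whereas a two-dimensional array would reserve space for every empty cell.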

\section{Rendering}

When a sufficient number of articles are present for each topic, the results are rendered into a fully stylised web site, displaying the articles as ordered by their relevance to the desired location. This is achieved by taking the list of articles for each topic and inserting their contents within HTML tags into pre-defined templates. 

The user is thus able to view the results and navigate between each section. An example of this web site is included in Appendix A.
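The template insertion described above reduces to simple placeholder substitution; a minimal sketch (placeholder names and the class are illustrative, not from the actual templates):

```java
// Sketch of the rendering step: each article's fields are
// substituted into placeholders within a pre-defined HTML template.
public class PageRenderer {
    public static String render(String template, String headline, String body) {
        return template.replace("{{HEADLINE}}", headline)
                       .replace("{{BODY}}", body);
    }
}
```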

