% Chapter 3

\chapter{Technical Basis}
\label{Chapter3}
\lhead{Chapter 3. \emph{Technical Basis}}
                      
%		A section dedicated to describing the technical basis on which the
%        project depends.  The aim of this section is to explain the specific
%        problem the project addresses in detail, and the describe previous 
%        and related work in the area (e.g., the technical details of alternate
%        solutions or algorithms which you use later on).  Explain these
%        technical aspects in a clear a concise manner.  After reading this
%        section, one should have the background required to understand the
%        implementation of the project and assess how appropriate and novel 
%        the proposed approach is.

This section describes the technical theory used to build this local newspaper generator, including an explanation of alternative methods and why they were not selected. The particular issues that will be addressed are:

\begin{itemize}
\item The aggregation and extraction techniques used to acquire articles for analysis. 
\item Vector representations of documents as well as the different methods for selecting and extracting the best features for text classification. 
\item Statistical classification techniques with a detailed explanation of Support Vector Machines. 
\item The topic and location categorization techniques that will be used.
\item Neighbour detection for expanding the scope of search for relevant articles.  
\end{itemize}

\section{Online Content Monitoring}

The first task the product will need to perform is the acquisition of content from online sources. This is essential in order to provide sufficient data for testing the proposed topic and location categorisation techniques. There are two distinct methods of collecting and monitoring online news.

\subsection{Web crawling}

The naive approach to collecting news articles is to use web crawlers that search through online news sources and index any new articles detected. This approach is quite limited, as it requires a crawler capable of automatically differentiating between news articles and the other web pages a news source may contain. These crawlers must therefore be manually tailored to each source and constantly updated to ensure their continued accuracy.

With no knowledge of when these sites are updated, the crawlers would simply have to be relaunched at regular intervals in order to acquire new content. This `hit and miss' approach would therefore not be the optimal choice for this task.

\subsection{RSS Feed Monitoring}

As discussed in the previous chapter, RSS feeds provide syndicated content from news sources. Using these feeds would therefore solve the issues faced by web crawlers, as they are automatically updated with new content and only provide links to news articles. This minimises any delay between article publication and detection by our program.

% and also enabling the use of the extra metadata provided

Another benefit of using this approach is the metadata attached to each article\footnote{ RSS feeds will almost always provide the title, publication date and an extract from the article.}, which can provide useful information to the extraction and classification processes. 

With almost all online content publishers providing these feeds, they shall be the principal form of input to the program. 
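As a concrete illustration, the sketch below pulls the item metadata out of a minimal RSS 2.0 fragment using only Python's standard library. The feed content, titles and URLs are invented for the example.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 fragment of the kind a news source might publish
# (hypothetical feed content, for illustration only).
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Local News</title>
  <item>
    <title>Council approves new cycle lane</title>
    <link>http://news.example.com/articles/cycle-lane</link>
    <pubDate>Mon, 09 Mar 2009 10:00:00 GMT</pubDate>
    <description>The city council has approved plans for...</description>
  </item>
</channel></rss>"""

def parse_feed(feed_xml):
    """Return the metadata of each item in the feed as a dictionary."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        items.append({field: item.findtext(field)
                      for field in ("title", "link", "pubDate", "description")})
    return items

articles = parse_feed(SAMPLE_FEED)
```

In practice the monitor would poll each registered feed at an interval and keep only items whose links it has not yet indexed.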

\section{Article Content Extraction}

Although RSS feeds provide links to new articles, they rarely provide the full text required for proper analysis and classification. This information therefore needs to be extracted from the web page the article is contained in.

This is not a trivial task as there are no general standards for the way news is published online. With web pages containing significant amounts of redundant information, simply extracting all text from the page would lead to very noisy data and thus inaccurate classification. There are many different approaches that have been proposed for solving this problem.

\subsection{Visual Wrapper Approach}

The simplest method is to define a schema for each particular news source which defines exactly where the relevant information\footnote{ Refers to the article's title, author and full content} is located within the web page. 

Also known as a visual wrapper, this approach works on the assumption that every online news source possesses a template for its news stories, meaning the article content will always appear in the same location on the page. 

These schemas need to be manually constructed for each new source and updated each time its template is altered, limiting scalability. Some approaches have been suggested that automate the process of learning these wrappers, such as those proposed by Zheng et al \cite{vs} and Meng et al \cite{zwrap}. These are, however, dependent on manually labelled training data from each source, and thus also suffer from a lack of adaptability to new templates.

With the large number of disparate sources from which local news can be acquired, especially when attempting to cover a large number of locations, this would not be a viable approach for this project. 

\subsection{Extraction Using Tree-Edit Distance}

Reis et al \cite{treeedit} use the consistency of HTML DOM trees\footnote{ As explained in section ***.}, and the concept of tree-edit distance \cite{tanaka}, to automatically extract news from web sites. 

A form of visual wrapper detection, it generates an extraction pattern from clustered pages, working on the assumption that each news source shares a common template.

This is done by taking a set of example web pages for each source and using the DOM tree to convert them into ordered rooted trees\footnote{ Trees that possess a fixed root node and whose node order is defined.}. These trees are then compared against each other by calculating the minimum number of operations needed to transform one into the other, a concept formally defined as mapping by Tai \cite{tai}. This allows it to detect the common format and layout characteristics of a news source and automatically extract the content of news articles.

This method shares the same problem as other visual wrapper approaches, relying on manually labelled training data. It has also only been reported to produce 87.71\% accuracy in news extraction \cite{treeedit}, significantly lower than many other proposed methods.

\subsection{Perception-Oriented News Extraction}

Chen and Xiao \cite{percep} have proposed an extraction approach based on the visual perception of a web page. This method converts web pages into a Function-Object Model (FOM) \cite{fom}, which attempts to extract the function of each element of a web page rather than simply its content. 

This approach works on the assumption that web sites are designed for humans to easily perceive the different elements they contain. It therefore uses the FOM model to simulate a human's visual perception of the page and automatically extract the article content.

Although it can be used more generically than visual wrappers, and is reported to have 99\% extraction accuracy \cite{percep}, it requires the conversion of a web page into the FOM model, which is a computationally expensive task. This process is also not well documented and is subject to a patent by Google \cite{fompatent}, meaning that any product adopting this approach could not be used commercially.

\subsection{Tree Analysis using Prior Knowledge}

Although RSS feeds tend to provide only an extract from the article, it is possible to use this information to obtain its full contents without the need to build a complete schema for the news source.

Previous work in this area has identified that the contents of an article are usually encompassed within a larger structural element, such as a DIV\footnote{ Defines a division or a section in an HTML document.}. When representing the web page as a DOM tree (as discussed in section ***), the article content is thus contained within one particular branch, as shown in Figure ***, whose root would be the DIV tag.

*** INSERT IMAGES SHOWING BRANCH ****

Using the axioms defined by Chen and Xiao \cite{percep}, coupled with the structural analysis of many different online news articles, it is also possible to identify an article's full text as being located in HTML tags at a constant depth within the DOM tree.

Using the extract of the article provided by the RSS feed, it is thus possible to locate the branch of the DOM tree containing the article text, and use the depth at which it was discovered to ensure its full contents are obtained. 
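A minimal sketch of this idea, assuming the RSS extract survives verbatim in the page: parse the HTML while tracking tag-nesting depth, find the depth at which the extract appears, and keep all text found at that depth. A real implementation would also apply the DIV-branch check described above; the `ArticleLocator` helper is invented for the example.

```python
from html.parser import HTMLParser

class ArticleLocator(HTMLParser):
    """Records the tag-nesting depth at which each run of text appears."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.chunks = []  # list of (depth, text) pairs

    def handle_starttag(self, tag, attrs):
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

    def handle_data(self, data):
        if data.strip():
            self.chunks.append((self.depth, data.strip()))

def extract_article(page_html, rss_extract):
    """Find the depth at which the RSS extract occurs, then keep all
    text located at that same depth as the article's full content."""
    locator = ArticleLocator()
    locator.feed(page_html)
    target = next(d for d, t in locator.chunks if rss_extract in t)
    return " ".join(t for d, t in locator.chunks if d == target)
```

Text at other depths (navigation menus, sidebars) is discarded because it does not share the constant depth of the article paragraphs.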

Although this method is dependent on being provided with an extract from the article, it does not require any training data or tailoring to individual news sources, allowing it to be fully scalable. With RSS feeds being used as the input to the product, it is thus the optimal choice for this project.

\section{Document Representation}

In order to use statistical text classification methods, documents must be represented in numerical form. This is the process of taking a document $D$ and placing it in a vector space, as first proposed by Salton \cite{salton}:

\begin{equation}
\mathbf{D = (w_1, w_2, \ldots, w_{|F|})}
\end{equation}

where $\mathbf{F}$ is the set of features extracted from the contents of the document and $\mathbf{w_i}$ is the weight assigned to the $i$th feature. 

%The choice these features to represent is key to the performance of any text classification method. 
There are two predominantly used representations of a document's feature space.

\subsection{Bag of Words}

The most common approach is to represent a document as a bag of words. This refers to converting text into a list of the terms contained within it. This list is then represented as a sparse matrix, as can be seen in figure ***.

***inserted figure of matrix***

In figure ***, each row represents a document within the corpus and each column represents a word within the vocabulary. Each row is therefore the feature vector for the given document. Each column holds a value for the corresponding term, 0 if that term is not present in the particular document. The value assigned to each term depends on how it is represented, which is discussed in section ***.
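The construction above can be sketched as follows on a two-document toy corpus, using raw term counts as the values (weighting schemes are discussed later):

```python
from collections import Counter

# A hypothetical two-document corpus, for illustration only.
corpus = ["the match was won by the home side",
          "the council met to discuss the budget"]

# Vocabulary: every distinct term across the corpus, in a fixed order,
# so each term always maps to the same column.
vocabulary = sorted({term for doc in corpus for term in doc.split()})

def to_vector(document):
    """Represent a document as term counts over the shared vocabulary;
    absent terms receive the value 0."""
    counts = Counter(document.split())
    return [counts[term] for term in vocabulary]

# One row per document, one column per vocabulary term.
matrix = [to_vector(doc) for doc in corpus]
```

Most entries are zero, which is why the matrix is stored sparsely in practice.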

\subsection{Bag of Concepts}

Another approach is to use feature extraction techniques to acquire a more complex representation of a document. This attempts to capture the meaning of the text rather than treating the words as individual features. 

Examples of these representations include N-grams\footnote{ A method which represents N words per feature rather than just one.}, proposed by Dumais et al \cite{dumais}, and distributed clustering methods, proposed by Baker and McCallum \cite{synonyms}, which attempt to merge synonyms together. Although these methods tend to radically reduce the size of the feature space, they generally provide slightly worse classification accuracy \cite{concepts}.

With the extra computation these methods require, and generally negative influence on accuracy, the bag of words approach shall be used in this project.

\section{Term Weighting}

In order for terms to be represented in numerical form, a weighting must be assigned  according to some pre-determined method. This means judging a term's importance in relation to others within the document. 

When converting documents into a bag of words representation, there are many different ways of representing the value of each term.

\subsection{Binary}

This weighting simply indicates the presence or otherwise of a term within the document. This simplicity means that a lot of information about each term is lost, leading to an inaccurate representation of the document.

\subsection{Term Frequency}

This weighting represents the total number of occurrences of a term within the document. Although it provides more information than the binary representation, it too provides a rather naive evaluation of a term's importance to the document.

\subsection{TF-IDF Weighting}

The most widely used weighting method combines the term frequency above with the term's frequency across the global corpus.\footnote{ The corpus is the full set of documents being represented.} It is calculated using the following formula:

\begin{equation}
\mathbf{w_{ij} = tf_{ij} \times \log \frac{N}{n_j}}
\end{equation}

where $\mathbf{N}$ is the total number of documents, $\mathbf{n_j}$ is the number of documents in which term $j$ is present, $\mathbf{tf_{ij}}$ is the frequency of term $j$ in document $i$, and $\mathbf{w_{ij}}$ is the resulting weight. Terms that are frequent in a document but rare across the corpus thus receive high weights. This value provides the most accurate account of a term's true importance within a corpus, which is why it shall be used for this project.
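The weighting can be sketched directly from the formula above; the helper and the three-document corpus are hypothetical, for illustration only.

```python
import math

def tf_idf(term, document, corpus):
    """w = tf * log(N / n): term frequency scaled by how rare the
    term is across the corpus. Returns 0 for unseen terms."""
    tf = document.count(term)                          # occurrences in this document
    n = sum(1 for doc in corpus if term in doc)        # documents containing the term
    N = len(corpus)                                    # total number of documents
    return tf * math.log(N / n) if n else 0.0

# A toy corpus of tokenised documents.
corpus = [["bristol", "city", "win"],
          ["city", "budget", "cut"],
          ["bristol", "festival"]]
```

A term appearing in every document scores zero, while a term unique to one document gets the full log(N) boost.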

%\subsection{Bi-Normal Separation} IF I NEED FILLER

\section{Feature Selection}

%Speed up algortihms and remove space requirements
With each additional term increasing the dimensionality of the feature space, it is important to carry out feature selection in order to remove the words that will not aid in the classification process and would instead simply slow it down. Most of these techniques have been evaluated for text classification by Yang and Pedersen \cite{compstudy}.

\subsection{Stop-Word Removal}

The quickest way to carry out this process is to simply remove words that carry no semantic value or provide no context to a document, such as 'a', 'the' and 'also'. This has been shown by Silva and Ribeiro \cite{sw} to provide a significant improvement to both the speed and accuracy of text classification methods.
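A sketch of the idea, using a deliberately small illustrative stop-word list (a real system would use a fuller list, such as those distributed with common IR toolkits):

```python
# Illustrative stop-word list; not exhaustive.
STOP_WORDS = {"a", "an", "the", "also", "to", "of", "and", "in", "is"}

def remove_stop_words(terms):
    """Drop terms that carry no semantic value for classification."""
    return [t for t in terms if t.lower() not in STOP_WORDS]
```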

\subsection{Word Stemming}

This refers to the process of removing suffixes from words in order to group them together. Removing suffixes such as -ED, -ING, -ION and -IONS reduces the dimensionality significantly.

The most popular algorithm in information retrieval was proposed by Porter \cite{portpop} \cite{porter} and performs the stemming process in several stages.

\subsection{Document Frequency}

A minimum threshold can be set for the number of documents in which a term must be present in order to be added to the training vocabulary. This significantly reduces the number of features represented and ensures that rarely used words do not skew the training sets for a particular category. This has been shown by Rogati and Yang \cite{highperform} to also increase classification accuracy.

The threshold value must be selected very carefully and is particular to each classification task: the aggressive removal of rarer terms, as shown by Yang and Pedersen \cite{compstudy}, can in fact decrease the likelihood of accurate classification, as it may remove terms that are highly indicative of a particular category. 
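The thresholding step can be sketched as follows, with `min_df` standing in for the chosen threshold and a toy corpus of per-document term sets:

```python
def df_filter(corpus, min_df):
    """Keep only terms that appear in at least min_df documents;
    rarer terms are dropped from the training vocabulary."""
    vocabulary = {term for doc in corpus for term in doc}
    return {term for term in vocabulary
            if sum(1 for doc in corpus if term in doc) >= min_df}

# Hypothetical corpus: each document reduced to its set of terms.
docs = [{"council", "budget"}, {"council", "festival"}, {"council", "zyzzyva"}]
```

Note how an over-aggressive threshold here would also discard "budget" and "festival", which might be highly indicative of a category.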

\subsection{Information Gain}

Information gain is a more complex feature selection technique which calculates how well the presence of a term in a document indicates that it belongs to a certain category. Although there are many representations of how to calculate it, Yang and Pedersen \cite{compstudy} define it as:

\begin{equation}
\mathbf{G(t) = -\sum_{i=1}^{m} Pr(c_i)\log Pr(c_i) + Pr(t)\sum_{i=1}^{m} Pr(c_i|t)\log Pr(c_i|t) + Pr(\bar{t})\sum_{i=1}^{m} Pr(c_i|\bar{t})\log Pr(c_i|\bar{t})}
\end{equation}

where $\mathbf{c_i}$ ranges over the $m$ categories and $\mathbf{\bar{t}}$ denotes the absence of term $t$. This combines the probability of a document occurring in each category, the likelihood that a document belongs to a category given the presence of the term, and the likelihood given its absence. As with document frequency, a threshold value is established under which the term is removed from the vocabulary. This ensures that a term's presence aids in assigning a document to a category.
 
 % PERHAPS TALK ABOUT CHI TEST

%\section{Feature Extraction}
%
%\subsection{Noun Extraction}
%

\section{Text Classification}

In order to assign articles to categories automatically, statistical text classification will be used: a classifier is trained on a set of pre-labelled example documents and then used to predict the category of unseen documents. The main methods considered for this project are described below.

%In order to ensure the scalability of the product and avoid 

\subsection{Support Vector Machines}

Support Vector Machines are one of the most widely used text classification methods based on statistical learning theory. Initially conceived by Cortes and Vapnik \cite{vapnik}, they use kernel techniques to increase the dimension of a feature space and separate classes of data using a hyperplane.

\subsubsection*{Binary Classification}

This technique principally involves separating data into two sets of vectors in an N-dimensional space, where N denotes the number of features present in each vector. For its intended use in this project, each document is represented as a vector of terms (using the bag of words approach). The basic idea is to separate the instances of each class using a hyperplane. If only two features are present, this can be represented by a line, as can be seen in figure ***.

***Insert figure representing hyperplane****

The hyperplane can be described by $\mathbf{w \cdot x + b = 0}$, where $\mathbf{w}$ is the normal to the hyperplane and $\mathbf{b}$ is the bias. Implementing an SVM involves selecting $\mathbf{w}$ and $\mathbf{b}$ such that:

\begin{equation}
\mathbf{w \cdot x_i + b  \geq +1  \mbox{	  if } y_i = +1}
\end{equation}
\begin{equation}
\mathbf{w \cdot x_i + b  \leq -1   \mbox{  if } y_i = -1}
\end{equation}

where $\mathbf{y_i}$ indicates which class an instance belongs to. Combining these equations gives the more general equation:

\begin{equation}
\mathbf{y_i(x_i \cdot w + b) \geq 1 \quad \forall i}
\end{equation}

This means that in order to classify any new instance, it is simply necessary to see on which side of the hyperplane it falls. This decision function can be formally described as:

\begin{equation}
\mathbf{f(x) = sign(w \cdot x +b)}
\end{equation}

meaning that the sign of the result indicates the class of the instance. The arguments are therefore invariant to rescaling, as rescaling would not affect whether the result was positive or negative. The problem is thus to find the best choices of $\mathbf{w}$ and $\mathbf{b}$, such that the hyperplane best divides the feature space.
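The decision function can be illustrated with hand-picked values of $\mathbf{w}$ and $\mathbf{b}$ (purely hypothetical, not learned from data):

```python
def svm_decide(w, x, b):
    """sign(w . x + b): +1 for one class, -1 for the other."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Hand-picked hyperplane for illustration: x1 + x2 = 3 splits the plane.
w, b = [1.0, 1.0], -3.0
```

Scaling both `w` and `b` by the same positive factor leaves every decision unchanged, illustrating the rescaling invariance noted above.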

**INSERT FIGURES OF HYPERPLANE CHOICES***

As can be seen in figures ***, both represent different choices of these values, with the hyperplane on the right clearly providing a better fit to the data. This best fit is achieved when the margin between the hyperplane and the nearest instances is maximised. This means maximising the distance between the support vectors, which are the instances of both classes nearest to the hyperplane. The corresponding planes can be defined by the following equations:

\begin{equation}
\mathbf{w \cdot x_i + b  = 1 }
\end{equation}
\begin{equation}
\mathbf{w \cdot x_i + b  = -1}
\end{equation}

The distance between these two planes corresponds to the margin, meaning the optimal choice of $\mathbf{w}$ and $\mathbf{b}$ involves maximising this value. This is achieved using a method proposed by Boser et al \cite{boser}, which performs Quadratic Programming optimisation in order to obtain the values that optimally orientate the hyperplane to separate the two classes.

\subsubsection*{Soft Margins}

The binary classification method defined above attempts to place instances of different classes on either side of the hyperplane. This means it does not generalise well to data which contains noise, making it not linearly separable. With most real-life data being of this variety, a positive slack variable $\xi$ is introduced which allows some points from either class to appear on the incorrect side of the hyperplane.

This adds a new parameter, C, to the optimisation problem, which dictates the trade-off made between slack for misclassified instances and the size of the margin. This parameter also ensures that the hyperplane does not overfit training data which could contain outliers that reduce classification performance.

\subsubsection*{Kernel Methods}

***INSERT IMAGE OF ROUND CLASS***

There are many classification problems for which the data is not linearly separable, such as the data shown in figure ***, meaning it is not possible to create a hyperplane, even with soft margins, that is able to function correctly. In order to resolve this issue we use what is known as the kernel trick \cite{intro}. This involves moving instances from a low to a high dimensional space using a non-linear feature mapping function. An example would be converting $\mathbf{x \in R^3}$ to $\mathbf{\phi(x) \in R^7}$ by:

\begin{equation}
\mathbf{\phi(x) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, x_1^2, x_2^2, x_3^2)}
\end{equation}

This allows data that may have been intermeshed in a lower dimensional space to become linearly separable. There are many different kernels that can be used, with the most popular kernel functions being the Radial Basis Function (RBF) kernel\footnote{ Also known as the Gaussian kernel.} and the polynomial kernel. When dealing with a large data set with a considerable number of features, such as in text classification, non-linear kernels are known to slow down the process considerably when compared with the linear approach \cite{prac}.

\subsubsection*{Multi-Class Classification}

There are several different methods of using the binary classification technique of SVMs to perform multi-class classification. These include one-against-all, one-against-one, and the directed acyclic graph SVM\footnote{ Also referred to as DAGSVM.}, as proposed by Platt et al \cite{platt}. With one-against-one and DAGSVM achieving similar results in a comparative study by Hsu and Lin \cite{hsu}, and both being superior to one-against-all, one-against-one shall be used in this project.

One-against-one, first proposed by Knerr et al \cite{kneer}, involves comparing every pair of classes using binary SVM classification and assigning the class that wins the most pairwise contests.
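The voting scheme can be sketched as follows; `toy_classifier` is a stand-in for a trained pairwise SVM, invented purely so the example runs.

```python
from itertools import combinations
from collections import Counter

def one_against_one(x, classes, binary_classify):
    """Run a binary classifier for every pair of classes and assign
    the class that wins the most pairwise contests."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[binary_classify(x, a, b)] += 1
    return votes.most_common(1)[0][0]

# Toy stand-in for a trained pairwise classifier: the class whose
# label is numerically nearer to x wins (hypothetical).
def toy_classifier(x, a, b):
    return a if abs(x - a) <= abs(x - b) else b
```

For $k$ classes this trains and evaluates $k(k-1)/2$ binary classifiers, each on only two classes' worth of data.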


\subsection{Other Methods}

\subsubsection*{Naive Bayesian}

Naive Bayes classifiers use a probabilistic model of text to calculate the probability of a document belonging to a particular class. This can be expressed as:

\begin{equation}
\mathbf{Pr(C|x) = \frac{Pr(x|C)Pr(C)}{Pr(x)}}
\end{equation}

where C is the class and x is a document represented as a feature vector.

The classification process is simplified by making the assumption that features in the document are unrelated, with each feature contributing independently to the probability estimate. The document is assigned to the class for which $\mathbf{Pr(C|x)}$ is greatest.

This relatively simple method has been shown by Lewis \cite{lewis} and others to perform text classification to a high degree of accuracy and to work well with the significant number of features present in text documents.

\subsubsection*{k-Nearest Neighbour}

k-Nearest Neighbour is one of the simplest statistical learning methods and has been shown by Yang \cite{yangknn} to perform well for text classification. It is based on the assumption that documents near to each other within the feature space are likely to belong to the same class. As the name implies, k-NN classifies a document by comparing it to its closest neighbours within the feature space, with k denoting the number of neighbours to compare against.

***INSERT GRAPH showing knn working***

In Figure ***, 3-nearest neighbour classification is performed, with the point p being compared to the three points closest to it. With two of them belonging to the negative class, p is thus also classified as negative.
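The procedure can be sketched as follows on hypothetical two-feature training instances; squared Euclidean distance is used since only the ordering of neighbours matters.

```python
from collections import Counter

def knn_classify(point, labelled_points, k=3):
    """Assign the majority class among the k nearest labelled points."""
    def dist_sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    nearest = sorted(labelled_points, key=lambda lp: dist_sq(point, lp[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical training instances: ((feature1, feature2), class).
training = [((1, 1), "negative"), ((1, 2), "negative"),
            ((2, 1), "negative"), ((8, 8), "positive"),
            ((8, 9), "positive"), ((9, 8), "positive")]
```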

\section{Topic Categorization}

In order to organise articles into the particular sections the newspaper will contain, two forms of analysis will be combined.

\subsection{Link Analysis}

The simplest method of assessing the topic of an article is to use any existing information provided by the source. With most publishers providing little or no semantic information, the only universal data guaranteed to accompany the article content is its link.

The advent of content management systems has brought structured links to news content that can inform as to an article's topic: if the string \textit{'/sport/'} is located within the link, this indicates that the article is located within the sports section of the web site. It must be noted that this is different to searching for \textit{'sport'}. This more naive search would wrongly incorporate articles whose titles contain the word in a different context. An example of this would be the article headlined \textit{'New Acer Aspire One Netbooks Sport Beefy Specs, Low Price'}.\footnote{The link to this article is \url{http://www.pcworld.com/article/165045/new_acer_aspire_one_netbooks_sport_beefy_specs_low_price.html}} This method is however very limited and will only work for a small proportion of articles.
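A sketch of this link analysis, matching section markers against the URL path only (the marker table is illustrative, not exhaustive):

```python
from urllib.parse import urlparse

# Illustrative section markers; a real system would build this table
# per source or from observed link structures.
SECTION_MARKERS = {"/sport/": "sport", "/politics/": "politics",
                   "/business/": "business"}

def topic_from_link(url):
    """Look for a section marker in the URL path. Matching '/sport/'
    rather than 'sport' avoids false hits from words in the headline
    slug; returns None when no marker is found."""
    path = urlparse(url).path
    for marker, topic in SECTION_MARKERS.items():
        if marker in path:
            return topic
    return None
```

Note how the Acer netbook headline from the example above contains "sport" in its slug but is correctly left uncategorised.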

\subsection{Content Analysis}

For articles that cannot be categorised by analysing their link, text classification shall be carried out on their contents using Support Vector Machines, as described in section ***. SVMs have been shown to be one of the best statistical learning methods for text classification, as they are able to deal with a large number of features and multiple classes.

\section{Location Categorisation}

In order to establish the geographical area most relevant to an article, a combination of techniques will be used to provide a ranking for the results obtained. This is important firstly to identify the relevant articles for a given query and secondly to order the list of articles according to how relevant they are.

\subsection{Source Analysis}

Through analysis of many local news stories, it can be noted that an article relevant to a particular location will rarely, if ever, come from a news source based far away from it. Although it is difficult to determine the exact geographical location of each news source, it is fairly trivial to discover which country they are based in. The location categoriser used in this project will therefore skew heavily towards news sources based in the same country as the specified location.

\subsection{Content Analysis}

The most important aspect in deciding an article's location is the content of the article itself. The naive approach, taken by Google News, is to consider any article that contains the name of the desired location, or any of its neighbours, related to that location. This is misleading, as it leads to many false positives\footnote{Articles incorrectly being associated with a location.}, with articles about Bristol Palin\footnote{Daughter of former US Vice-Presidential candidate Sarah Palin.} being considered strongly relevant to Bristol, UK. 

Another problem with this method is its inability to associate articles which do not contain direct geographical references. An article about an event being held at a venue, such as the Black Swan in Bristol, may not necessarily reference Bristol or any other geographical entity.

To solve this problem, text classification shall be used, treating location as a topic. This allows the terms associated with Bristol to be chosen by the classifier, using the training data, rather than pre-determined by the user.

\subsection{Location Extraction}

Rather than analysing the whole content of the article, it is also possible to simply extract any locations referenced within it. Entity extraction from unstructured text is an area of computational linguistics which has been very heavily researched and for which many methods currently exist \cite{entitynews} \cite{entitytext} \cite{entityweb}. Most of these are, however, quite complex and add a computational overhead which is unnecessary for the given task.

With geographical entities always possessing a capitalised first letter, it is possible, for the purposes of identifying locations, to simply compare any capitalised words or phrases against a database of location names, creating a `bag of locations' similar to the bag of words representation described in section ***.

Once these have been acquired, there are several approaches that can be taken to calculate relevance to a particular location. These methods assume no prior knowledge of a location other than its name and GPS\footnote{ Global Positioning System.} coordinates.

\subsubsection*{Proportional Referencing}

This method calculates the proportion of references to a given location relative to the total number of location references in the document:

\begin{equation}
\mathbf{R_L = \frac{n_L}{N_{loc}}}
\end{equation}

where $\mathbf{n_L}$ is the number of references to location $L$ and $\mathbf{N_{loc}}$ is the total number of location references in the document.

As well as direct references to a particular location, any references to other locations associated with it, such as alternative names, boroughs or neighbourhoods within it, are also added to the count. Subtracted from it are any references to terms that are known to encourage false positives for the location. An example would be the term \textit{'Bristol Palin'} when analysing relevance to Bristol, UK.
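A sketch of the calculation, with the associated-term and false-positive lists as hypothetical inputs:

```python
def proportional_relevance(bag_of_locations, positive_terms, negative_terms):
    """Proportion of location references attributable to the target:
    direct and associated references count towards it, known
    false-positive terms count against it."""
    total = len(bag_of_locations)
    score = sum(1 for ref in bag_of_locations if ref in positive_terms)
    score -= sum(1 for ref in bag_of_locations if ref in negative_terms)
    return max(score, 0) / total if total else 0.0

# Hypothetical term lists for ranking relevance to Bristol, UK.
bristol = {"Bristol", "Clifton", "Stokes Croft"}
false_positives = {"Bristol Palin"}
```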

Though this method does not necessarily capture the full concept of the article, unlike using Support Vector Machines it requires no prior knowledge\footnote{ A lack of prior knowledge would simply mean positively or negatively associated terms play no role in the calculations.}. It will therefore be used in tandem with content analysis, ensuring a ranking is still provided to articles even if the program has not been trained for that particular location.

\subsubsection*{Distance Analysis}

Another method that uses this bag of locations approach is proposed by Read \cite{datamaps}. It involves using GPS coordinates to measure the distance between the desired location and all others mentioned within the document. This method does not match the needs of this project, as we are focused on which location is most relevant to the document. This approach would, for example, indicate that a story about a local school raising money for a town in Africa is significantly less relevant to a location than another entirely dedicated to discussing an event taking place in a neighbouring town.

\section{Finding Neighbouring Locations}

Crucial to the functioning of the product is its ability to widen the scope of its search for articles if it is not able to find a sufficient number for the target location. This means it not only has to be aware of the position of the target location, but also those of neighbouring locations. 

**INSERT IMAGES OF RADIUS ON GOOGLE MAPS***

In order to carry out this process, GPS coordinates will be used. The position of the target location will be acquired, then all locations within a pre-defined radius will be identified and articles about them searched for, in order of proximity to the target location. If none are found, the process is repeated with a greater search radius until a sufficient number of articles is acquired.
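The widening search can be sketched as follows, using the haversine formula for great-circle distance and a handful of approximate coordinates as a stand-in location database (all values hypothetical):

```python
import math

def distance_km(p, q):
    """Great-circle distance in km between two (lat, lon) points,
    via the haversine formula with a 6371 km Earth radius."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def expand_search(target, locations, initial_radius=10, step=10, max_radius=100):
    """Widen the search radius until at least one neighbouring
    location is found; neighbours are returned ordered by proximity."""
    radius = initial_radius
    while radius <= max_radius:
        near = sorted((loc for loc in locations
                       if 0 < distance_km(target, locations[loc]) <= radius),
                      key=lambda loc: distance_km(target, locations[loc]))
        if near:
            return near
        radius += step
    return []

# Approximate coordinates, standing in for the location database.
db = {"Bristol": (51.45, -2.59), "Bath": (51.38, -2.36),
      "Cardiff": (51.48, -3.18), "London": (51.51, -0.13)}
```

A real implementation would continue widening until enough articles, rather than enough locations, had been found.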

\subsection{Location Database}

In order to carry out this process in real time, the GPS coordinates of locations will have to be acquired quickly and efficiently. Despite the presence of APIs for both Google Maps\footnote{ Google Maps API, URL: \url{http://code.google.com/apis/maps/} [Last Accessed: 09/03/09]} and Microsoft's Live Maps\footnote{ Microsoft Virtual Earth, URL: \url{http://dev.live.com/VirtualEarth/} [Last Accessed: 09/03/09]}, neither has the functionality to support automatic detection of neighbouring locations.

It is therefore necessary to create a location database that the program will use to perform this task. This will not only have to allow quick access to coordinates given the name of a location, but will also have to be organised by those coordinates in order to quickly acquire a set of neighbouring locations. The distance between two points will be used as the measure of proximity.

