% Chapter 2
\chapter{Background}
\label{Chapter2}
\lhead{Chapter 2. \emph{Background}}

This chapter describes the major concepts required to understand the research and implementation carried out to produce the local newspaper generator.

\section{News Aggregation} \label{2aggr}
News aggregation services, as discussed in the previous chapter, are crucial in providing users with access to the vast amount of information dispersed across the many different news sources available on the internet. They provide two key services that simplify the process of consuming online news:

\begin{itemize}
\item \textbf{Automated article collection:} Using web crawlers and other methods, they monitor chosen news sources for any new articles. Once discovered, the articles are indexed and their content extracted for later use.

\item \textbf{Content analysis:} Having collected these articles and extracted their content, aggregators analyse it in order to identify its relevance to users. There are two ways of performing this task and, sometimes, a combination of the two will be employed:

\begin{enumerate}
\item \textbf{Manually:} Editors read each article and determine its topic and importance to the target audience, much as in a traditional newspaper. This method ensures the highest degree of accuracy and the best results for users. However, the time and manpower it requires make it hard to scale.

\item \textbf{Automatically:} Many aggregators use algorithms to automate this process. Although this reduces the quality of service, its scalability allows for faster and more personalised services.
\end{enumerate}
\end{itemize}
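The article-collection step described above can be sketched in a few lines of code. The following Python fragment is purely illustrative: the \texttt{fetch} callable stands in for whatever crawling or polling mechanism a real aggregator would use, and the names are invented for this example.

```python
# Minimal sketch of automated article collection: poll each source,
# record any article URLs not seen before, and index their content
# for later analysis. `fetch` is a stand-in for a real HTTP request
# or crawler; it returns (url, content) pairs for a source.

def collect_new_articles(sources, seen, fetch):
    """Return newly discovered articles and mark them as seen.

    sources: iterable of source identifiers to poll
    seen:    set of article URLs already indexed (updated in place)
    fetch:   callable returning a list of (url, content) pairs
    """
    new_articles = {}
    for source in sources:
        for url, content in fetch(source):
            if url not in seen:
                seen.add(url)
                new_articles[url] = content  # index for later analysis
    return new_articles
```

Calling this function repeatedly with the same \texttt{seen} set ensures each article is indexed only once, which is the essential property of the collection step.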

\section{RSS Feeds} \label{2rss}

RSS (Really Simple Syndication) feeds are a widely used format that allows web sites to automatically inform users of any updates to their content. These feeds are delivered as XML which, when used by news sources, provides links to new articles, full or summarised versions of their content, and any other metadata the publisher wishes to add.

RSS is of major benefit to web publishers, as it syndicates their content automatically and alerts users to newly available information. The benefit to users is being informed immediately of any changes to web sites, without the need to monitor them manually.

The popularity of this technology, especially amongst online media sources, provides news aggregators with a significantly easier method of collecting articles: they are guaranteed to be aware of any new articles added by a source, without relying on web crawlers that can easily miss pages they find no links towards. Many news sources also provide RSS feeds dedicated to particular topics or issues, simplifying the aggregator's categorisation process as well.
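To illustrate how little work is required to consume such a feed, the following sketch parses a simplified, invented RSS 2.0 document using Python's standard library and extracts the fields an aggregator typically needs. The sample feed content is hypothetical.

```python
import xml.etree.ElementTree as ET

# A simplified, invented RSS 2.0 feed for illustration only.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item>
      <title>Local council approves budget</title>
      <link>http://example.com/articles/1</link>
      <description>Summary of the article...</description>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Extract title, link and description from each <item> in a feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        })
    return items
```

Because every item carries its own link, the aggregator learns of new articles directly, rather than discovering them by crawling.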

\section{Tree Representation of Web Pages} \label{2dom}

In order for a program to easily access and manipulate web pages, it needs to be able to represent their contents in a logically structured form. Due to the hierarchical nature of (X)HTML, it is easily represented as a tree.

Using the simple web page outlined in Figure \ref{fig:html}, we can take $<$html$>$ to be the root node and any tag nested within another to be a child node, producing the tree in Figure \ref{fig:tree1}. This can in fact be defined as an ordered tree, with nodes further to the left appearing further up in the web page.

\begin{figure}[h]
  \centering
  \subfigure[Example of Basic Web Page]{
  \includegraphics[width=2.7in]{html.pdf}
  \label{fig:html}
  }            
  \subfigure[HTML Tree Example]{
  \includegraphics[width=2.7in]{HtmlTreeEx.pdf}
  \label{fig:tree1}
  }
  \caption{Example of an HTML document represented using a tree}
  \label{fig:htmltree}
\end{figure}



This structure is used by the Document Object Model (DOM), which allows easy representation and manipulation of HTML, XHTML and XML documents. Many projects, such as \cite{domtree} and \cite{treeedit}, use this model to extract or manipulate web content.
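The construction of such a tree can be sketched with Python's standard \texttt{html.parser} module. This is a deliberately minimal illustration of the parent/child structure described above, not a full DOM implementation: attributes, text nodes and void elements are ignored.

```python
from html.parser import HTMLParser

# Build an ordered tree of {"tag", "children"} nodes from well-formed
# HTML, mirroring the nesting structure of the document. Each opening
# tag becomes a child of the tag currently open above it.

class TreeBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = None
        self.stack = []  # currently open elements, root first

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "children": []}
        if self.stack:
            self.stack[-1]["children"].append(node)
        else:
            self.root = node
        self.stack.append(node)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

def build_tree(html):
    builder = TreeBuilder()
    builder.feed(html)
    return builder.root
```

Because children are appended in document order, the resulting tree is ordered in exactly the sense described above: siblings further to the left appear earlier in the page.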

\section{Web Content Extraction}
When aggregating online news, it is important to be able to acquire the full text of an article from the web page containing it. Although some news providers, such as the Guardian\footnote{The Guardian, http://feeds.guardian.co.uk/theguardian/rss [Last Accessed 16/05/09]} and Reuters\footnote{Feeds acquired via the Reuters Spotlight API, http://spotlight.reuters.com/ [Last Accessed 16/05/09]}, provide full article text within their RSS feeds, most tend to include only a summary. Anyone interested in an article must therefore visit the site and be exposed to the rest of the publisher's content and, perhaps most crucially, their advertising.

This creates a problem for aggregators wishing to perform any automatic analysis, as the article's content must be separated from the other information included in the relevant web page. This is a non-trivial task: these pages share no generic structure that is constant across different sources, so a program will struggle to differentiate between the full article text and other redundant information contained within the page. The various existing methods tackling this issue are examined in Section \ref{3ace}.
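A naive heuristic makes the difficulty concrete: assume the article body is simply the \texttt{<div>} element containing the most text. The sketch below implements this assumption; real pages, with their nested layouts and verbose navigation, easily defeat heuristics this simple, which is precisely why the dedicated methods surveyed later exist.

```python
from html.parser import HTMLParser

# Naive content-extraction heuristic: return the text of the <div>
# containing the most characters. Text inside nested <div>s is counted
# towards every enclosing <div>, so a large wrapper can win -- one of
# the many ways such a simple rule fails on real pages.

class LargestDivFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.buffers = []  # one text buffer per currently open <div>
        self.best = ""

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.buffers.append([])

    def handle_data(self, data):
        for buf in self.buffers:
            buf.append(data)

    def handle_endtag(self, tag):
        if tag == "div" and self.buffers:
            text = "".join(self.buffers.pop()).strip()
            if len(text) > len(self.best):
                self.best = text

def extract_main_text(html):
    finder = LargestDivFinder()
    finder.feed(html)
    return finder.best
```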

\section{Text Classification} \label{1textclass}

Text classification is the process of automatically categorising structured or unstructured information according to a set of pre-defined criteria. This enables documents to be grouped by concept, and will be used in this project to categorise news articles by topic and location (as described in Section \ref{3topic}). The same process can also perform sentiment or subjectivity analysis of text, such as differentiating between positive and negative film reviews.
Although many different methods of carrying out this process currently exist, many of which are explained in more detail in Section \ref{2tc}, they mostly revolve around two distinct approaches:
\begin{itemize}
\item \textbf{Knowledge Engineering:} These methods involve establishing logic-based rules that are heavily dependent on the type of classification being performed. A rule can be as simple as \emph{if `NYSE' appears in the headline or lead paragraph, the subject is `Stock Market'}, or several pages of complex Boolean logic. These rules are often manually written for specific scenarios, though some methods, such as those using decision trees proposed by Apte et al.\ \cite{decision}, have automated this process.

\item \textbf{Statistical Categorisation:} These methods use machine learning techniques that compare unknown documents to pre-classified ones, assigning a class based on similarity to the training data. It is therefore essential for accurate classification that the training data provided be sufficiently representative of each category.

In order to use these statistical methods, text-based documents must be converted into numerical feature vectors. This can be done in a variety of ways, which will be discussed in Section \ref{3dr}.
\end{itemize}
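The statistical approach can be sketched end-to-end with one of the simplest possible schemes: bag-of-words frequency vectors compared by cosine similarity, with an unknown document assigned the category of its nearest training example (1-nearest neighbour). The tiny training set below is invented for illustration; it stands in for the representative training data discussed above.

```python
import math
from collections import Counter

# Sketch of statistical categorisation: each document becomes a
# bag-of-words frequency vector, and an unknown document receives the
# category of the most similar training document (cosine similarity).

def vectorise(text):
    """Bag-of-words feature vector: word -> frequency."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify(text, training):
    """training: list of (category, example_text) pairs."""
    vec = vectorise(text)
    best_cat, best_score = None, -1.0
    for category, example in training:
        score = cosine(vec, vectorise(example))
        if score > best_score:
            best_cat, best_score = category, score
    return best_cat

# Invented two-category training set for illustration.
TRAINING = [
    ("sport", "the team won the match after a late goal"),
    ("finance", "shares fell on the stock market after the report"),
]
```

Even this toy classifier illustrates the dependence on training data noted above: a document sharing no vocabulary with either category scores zero against both, and its classification becomes arbitrary.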
