%
%
% Section for implementation
%
\section{Implementation}
\label{sec:Implementation}
The implementation of ChronoSearch is divided into the components shown in Figure \ref{fig:dataflow}. The logical flow of data through ChronoSearch starts with the input web pages and the input entity. This input is then fed sequentially through a series of processing steps that perform data preparation, extraction, and refinement, in roughly that order. The following sections describe each step in more detail.

\begin{figure}
\begin{center}
\includegraphics[height=3.5in]{system_overview.jpg}
\caption{ChronoSearch Data Flow}
\label{fig:dataflow}
\end{center}
\end{figure}

\subsection{Text Extraction}
The input to ChronoSearch includes web pages of any format related to the provided person entity. Web pages often contain scripts, links, and document object model (DOM) formatting, and vary in structure. In order to process the web pages, the first task is to extract textual information from them. Since later steps in the pipeline rely on natural language processing tools, it is imperative that the output of this step be as close to properly formatted written text as possible. Without some level of grammatical correctness, natural language processors suffer because they rely on grammatical cues to make further assumptions about the text.

To extract textual content from web pages, the Beautiful Soup HTML parser \cite{Richardson:2011:Online} is utilized to extract all HTML paragraph (`$<$p$>$') elements from each page. Paragraph elements are chosen because they are typically used to encapsulate textual content and they also provide a contextual boundary. Since the paragraph elements are concatenated together, this contextual boundary ensures that a sentence does not span multiple paragraphs and that poorly formatted text in one paragraph does not affect the textual processing of the next.
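
As a sketch of this step, Beautiful Soup can collect the paragraph text as follows. The function name and HTML sample are illustrative; the paper does not show the pipeline's actual code.

```python
from bs4 import BeautifulSoup

def extract_paragraph_text(html):
    """Return the text of every <p> element, one paragraph per line.

    Scripts, links, and other DOM markup are discarded, matching the
    behavior described above. (Illustrative sketch, not the paper's code.)
    """
    soup = BeautifulSoup(html, "html.parser")
    return "\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
```

Joining paragraphs with newlines rather than spaces preserves the contextual boundary between paragraphs discussed above.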
  
\subsection{Sentence Tokenization}
After extracting textual content from a web page, the next step in the pipeline is to tokenize the content into sentences so that the event extractor can extract event descriptions in the form of sentences. To tokenize the textual input into sentences, the natural language toolkit (NLTK) \cite{bird2009natural} is utilized. The sentence tokenizer performs well on grammatically correct text, but can output non-sentences when fed data it is incapable of handling. Due to this behavior, the output of the NLTK sentence tokenizer is further sanitized by removing sentences that do not conform to average sentence word length distributions, as described in the design section.
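
A minimal sketch of this step, with two caveats: a naive regex splitter stands in for NLTK's tokenizer here, and the word-length bounds are illustrative placeholders rather than the distribution-derived values from the design section.

```python
import re

def split_sentences(text):
    # Naive splitter standing in for NLTK's sentence tokenizer
    # (the real pipeline uses NLTK, per the text above).
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def sanitize(sentences, min_words=3, max_words=60):
    # Drop "sentences" whose word counts fall outside plausible bounds;
    # the 3/60 limits are assumptions, not the paper's tuned values.
    return [s for s in sentences if min_words <= len(s.split()) <= max_words]
```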

\subsection{Event Extraction}
After the textual information from the input web pages has been extracted and tokenized into sentences, the next step is to extract event descriptions. As mentioned in the preceding design section, an event description is a sentence containing both the input entity and a date. To extract these sentences, two sets of regular expressions are generated: one set for dates and one set for the entity. If a sentence matches at least one expression from each set, it is added to the result set.
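
The matching logic can be sketched as follows. The patterns shown are simplified placeholders for the two expression sets, which the paper does not enumerate, and the surname-matching heuristic is an assumption.

```python
import re

# Illustrative date patterns; the real set is presumably larger.
DATE_PATTERNS = [
    re.compile(r'\b(1[89]|20)\d{2}\b'),          # bare four-digit year
    re.compile(r'\b(January|February|March|April|May|June|July|'
               r'August|September|October|November|December)\s+\d{1,2}\b'),
]

def entity_patterns(name):
    # Match the full name, or the surname alone (an assumed heuristic).
    return [re.compile(re.escape(name)),
            re.compile(r'\b%s\b' % re.escape(name.split()[-1]))]

def extract_events(sentences, entity):
    ent = entity_patterns(entity)
    # Keep sentences matching at least one pattern from each set.
    return [s for s in sentences
            if any(p.search(s) for p in DATE_PATTERNS)
            and any(p.search(s) for p in ent)]
```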

\subsection{Duplicate Removal}
Once the result set is generated, our duplicate removal process is invoked to remove duplicate event descriptions. As mentioned in the design section, duplicate event descriptions are removed based on their similarity, using a cosine similarity test and a verb similarity test.

The cosine similarity test is implemented using sparse vectors. First, an array of all the words present in all of the event descriptions is generated. To improve the cosine similarity test, only word stems are considered. Stemming reduces all words with the same root to a common form by stripping each word of its derivational and inflectional suffixes. Stemming improves similarity tests because it maps multiple versions of a word with a single meaning to one word \cite{lovins1968development}. Each sentence is then converted to a sparse vector $V$, where each element in $V$ represents the number of times a stem from the global array is present in the sentence. The cosine function is then used to calculate the distance between each pair of vectors. Because duplicate event descriptions are mostly found within the same year, and because the computational complexity of the described algorithm is $O(n^2)$, descriptions are only compared to descriptions within the same year.
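
A sketch of the stem-based cosine test, using a crude suffix stripper in place of a real Lovins-style stemmer and term-count dictionaries in place of the global stem array:

```python
import math
from collections import Counter

def stem(word):
    # Crude suffix stripping; a stand-in for the real stemmer cited above.
    word = word.lower().strip('.,;:!?')
    for suf in ('ing', 'ed', 'es', 's'):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[:-len(suf)]
    return word

def cosine_similarity(a, b):
    # Sparse stem-count vectors, compared with the cosine function.
    va = Counter(stem(w) for w in a.split())
    vb = Counter(stem(w) for w in b.split())
    dot = sum(va[k] * vb[k] for k in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0
```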

The verb similarity test is implemented by extracting the set of verbs from each event description. These sets of verbs are then compared to determine duplicate event descriptions. For the same performance reasons stated above, the verb similarity test is only applied to event descriptions that occur on the same day. 
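
The paper does not specify how the verb sets are compared, so the sketch below uses one plausible set-overlap score (Jaccard similarity); it also assumes verb extraction (e.g., via POS tagging) has happened upstream.

```python
def verb_similarity(verbs_a, verbs_b):
    # Jaccard overlap of pre-extracted verb sets. The choice of Jaccard
    # is an assumption; the paper only says the sets are "compared".
    if not verbs_a or not verbs_b:
        return 0.0
    return len(verbs_a & verbs_b) / len(verbs_a | verbs_b)
```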

In both the cosine and verb similarity tests, a similarity score greater than 0.5 results in one of the event descriptions being removed. The event description selected for removal is chosen arbitrarily.
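
The thresholded removal can be sketched as below; a toy word-overlap score stands in for the cosine and verb tests, and dropping the later of two duplicates is one way to realize the arbitrary choice described above.

```python
def jaccard(a, b):
    # Toy word-overlap score standing in for the cosine/verb tests.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def deduplicate(events, similarity, threshold=0.5):
    # Keep an event only if it is not too similar to one already kept;
    # the later duplicate is the one dropped (an arbitrary choice).
    kept = []
    for e in events:
        if all(similarity(e, k) <= threshold for k in kept):
            kept.append(e)
    return kept
```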
