\documentclass[10pt,a4paper]{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}

\usepackage[pdftex]{graphicx}
\usepackage[english]{babel}
\usepackage{url}
\selectlanguage{english}

\title{NetOracle: a semantic question answering system}

\author{Menno den Hollander \and Ricky Lindeman}

\pagestyle{plain}
\begin{document}
\maketitle
\thispagestyle{plain}

\begin{abstract}
This report discusses the development of a question answering system that extracts semantic information from the world wide web to produce an answer.
\end{abstract}

\section{Introduction}\label{section:introduction}
Speech and language processing could one day offer machines human-like communication capabilities. However, much work remains to be done.

In this report we discuss our findings from building a fully functional question answering system. Its job is to answer questions using facts derived from the internet.
Moreover, this system provides a platform that can be used to kick-start research in multi-modal dialogue management, speech recognition, natural language processing, emotion recognition and human-computer interaction. The developed question answering system can also be used for educational purposes, since it allows the user to observe its inner workings. Finally, the system can be used for marketing purposes to attract new students.

We first discuss the requirements of the question answering system in section 2. Then we discuss the most important design decisions in section 3. Due to technical issues and time constraints we were unable to implement all the requirements; hence, section 4 discusses the current state of the implemented question answering system, and section 5 evaluates it.
Finally, in the last section we conclude our findings and discuss some enhancements that should improve the performance of the system.

\section{Requirements}\label{section:requirements}
In this section we give an overview of the requirements of our question answering system. These requirements are high-level: they are not a full specification of the system, but serve to give an idea of its main features.

The requirements are divided into requirements for the dialogue system, which is responsible for the natural language interface with the user; requirements for the answer computation system, which is responsible for interpreting the question and computing an answer; and requirements for the graphical user interface, which provides access to the dialogue system and to information about the inner workings of the question answering system.

The goal of this project is to develop a question answering system that is capable of answering questions using information found on the internet. The focus is on question interpretation, information retrieval and fact deduction. Hence, we have kept the dialogue manager simple. Since the intended use of our prototype is purely educational, the graphical user interface should provide detailed information about the inner workings of the question answering system.

\subsection{Dialogue system requirements}

\begin{enumerate}
\item Dialogue control is system-led; the system and the user take turns communicating, i.e. the user asks a question, then the system answers.
\item The dialogue system is stateless, so questions cannot refer to previous questions and answers.
\item The user provides textual input in natural (English) language.
\item The system produces textual output in natural (English) language. It only returns ``yes'', ``no'' or a single phrase. For example, when the system is asked ``Who is Tina Turner?'', the system replies ``a singer''.
\end{enumerate}

\subsection{Answer computation system requirements}
\begin{enumerate}
\item The answer computation system is only capable of answering factual questions whose answers can easily be found on the internet by a human. The question answering system does not support features such as chit-chat or answering questions that require logical deduction, such as mathematical questions.
\item The system interacts with a web search engine to locate webpages that are likely to contain the answer to the question.
\item The system fetches these relevant webpages, extracts the content and strips irrelevant text from the content, such as navigation bars and advertisements.
\item The system employs a Punkt tokenizer to divide the content into sentences.
\item The system employs a part-of-speech tagger to tag each word with its word category. 
\item The system uses the derived tags to divide each sentence into chunks, deriving its syntactic structure.
\item Each syntactical structure is mapped to a logical expression in first order logic that represents the semantic meaning of the sentence.
\item The first order logic sentences are then passed to a theorem prover that computes an answer.
\item The computed result is then transformed to text to describe the answer in plain English.
\end{enumerate}

\subsection{Graphical user interface requirements}
\begin{enumerate}
\item The graphical user interface provides the user with an intuitive interface to the question answering system.
\item The graphical user interface provides information about the inner workings of the question answering system to the user. The following information is made available for each webpage analyzed:
\begin{enumerate}
\item A summary of the search engine results.
\item The webpage text after filtering HTML tags and script code.
\item The webpage text including the results of the relevance filter, e.g. navigation bar text or banner text should be considered irrelevant.
\item The results of the Punkt tokenizer splitting the relevant text into sentences.
\item The results of the part-of-speech tagger and chunker for each sentence. A sentence is split into multiple chunks using the part-of-speech tags designated to each word in the sentence.
\item The semantic (first order logic) representation of the sentences and questions.
\item The output of the employed theorem prover and the model checker.
\end{enumerate}
\end{enumerate}

\begin{figure}
\begin{center}
\includegraphics[scale=1]{conceptualdesign.pdf}
\caption{Conceptual design of the question answering system}
\label{figure:conceptualdesign}
\end{center}
\end{figure}

\section{Design}\label{section:design}
This section describes the design of the question answering system. Figure \ref{figure:conceptualdesign} provides an overview of the different stages of the question answering process. The dialogue system is responsible for collecting a question from the user and displaying the answer. The question is first sent to the search engine component, which interfaces with the Yahoo! search engine. The result is a list of relevant webpages, which are then processed by the webpage preprocessor to extract the relevant text. The part-of-speech tagger is then responsible for determining the word class of each word: whether it is a noun, verb, adjective, preposition, pronoun, adverb or something else. In the next stage the sentence chunker groups the words, using their word classes, into chunks that correspond to constituents, for example a noun group or a verb group. The semantic analyzer transforms these chunks into first-order logic sentences, which are then fed to a theorem prover and a model checker to compute an answer to the question. Finally, in the last stage the answer computed by the theorem prover or model checker is mapped to an answer in natural English.

The webpage preprocessor, the part-of-speech tagger, the sentence chunker, the semantic analyzer, the theorem prover and model checker are discussed in more detail in the following subsections.

\subsection{Webpage Preprocessor}
The webpage preprocessor is responsible for extracting the relevant text from an HTML webpage. It first removes HTML tags, comments and script elements from the HTML page. The result is a text file that contains only plain text. However, not all of this text is relevant. For example, most of the time the navigation bars, header and footer of a website do not contain any useful information and exist primarily to navigate the website. This text is removed to improve the overall performance of the system by avoiding unnecessary calculations in the following processing steps. The irrelevant text is filtered by calculating the text-to-tag ratio and removing text elements that have a high tag density~\cite{texttotagratio}; e.g. a navigation bar has many hyperlinks, hence the tag density of the associated text is high. More specifically, the text-to-tag ratio is determined by counting the number of text characters on a line and dividing it by the number of tags on that line. This ratio is then smoothed by taking the average ratio of the lines in the neighborhood. Finally, the text lines that have a lower ratio than the threshold are discarded. This threshold is the mean minus the standard deviation of all the computed text-to-tag ratios of the text. Figure \ref{figure:filteredgraph} shows the raw and smoothed text-to-tag ratios and the threshold value for the Wikipedia page of Tina Turner.
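The filtering step above can be sketched in a few lines of Python. This is a simplified illustration of the text-to-tag-ratio heuristic, not our actual preprocessor code: the tag-matching regular expression, the one-line smoothing window and the use of the population standard deviation are assumptions made for the sketch.

```python
import re
import statistics

def text_to_tag_ratios(html_lines, window=1):
    """Compute raw and smoothed text-to-tag ratios per line of HTML."""
    raw = []
    for line in html_lines:
        tags = len(re.findall(r"<[^>]+>", line))
        text = len(re.sub(r"<[^>]+>", "", line))
        # Avoid division by zero: a tag-free line keeps its text length.
        raw.append(text / max(tags, 1))
    smoothed = []
    for i in range(len(raw)):
        lo, hi = max(0, i - window), min(len(raw), i + window + 1)
        smoothed.append(sum(raw[lo:hi]) / (hi - lo))
    return raw, smoothed

def filter_relevant(html_lines):
    """Keep only lines whose smoothed ratio exceeds mean - stddev."""
    _, smoothed = text_to_tag_ratios(html_lines)
    threshold = statistics.mean(smoothed) - statistics.pstdev(smoothed)
    return [re.sub(r"<[^>]+>", "", line)
            for line, r in zip(html_lines, smoothed) if r > threshold]
```

On a page with tag-dense navigation lines surrounding long prose lines, the navigation lines fall below the threshold and are dropped, while the prose survives with its tags stripped.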

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{4_filtered_graph.png}
\caption{Text-to-tag ratio graph for the Tina Turner Wikipedia page. The vertical axis represents the text-to-tag ratio, whereas the horizontal axis represents the line number of the fetched internet page. The red dashed line represents the raw text-to-tag ratio, the blue line the smoothed text-to-tag ratio, and the flat yellow line the threshold. Everything below this threshold is discarded.}
\label{figure:filteredgraph}
\end{center}
\end{figure} 

\subsection{Part-of-Speech Tagger \& Sentence Chunker}
\label{sec:tagger and chunker}
The part-of-speech tagger and the sentence chunker are responsible for determining the word class of each word and dividing sentences into constituents. The part-of-speech tagger is a bigram tagger trained on the Brown corpus~\cite{francis1967computational}. This bigram tagger backs off to a unigram tagger when the bigram tagger is unable to compute an answer. When the unigram tagger also fails, the word is tagged as a noun by default. The decision to use a bigram tagger is based on the assumption that the text pages fetched from the internet are diverse in grammar and word style. We expect that a trigram tagger would not be beneficial, because it is more context specific, meaning that we would have to select a training set that represents these texts more accurately.
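The backoff chain can be illustrated as follows. This is a self-contained, pure-Python sketch of the same scheme (in the prototype the chain is built from NLTK's tagger classes and trained on the Brown corpus; the toy training corpus below merely stands in for Brown).

```python
from collections import Counter, defaultdict

class BackoffTagger:
    """Bigram tagger that backs off to a unigram tagger, then to 'NN'."""

    def __init__(self, tagged_sents, default_tag="NN"):
        self.default_tag = default_tag
        self.unigram = defaultdict(Counter)  # word -> tag counts
        self.bigram = defaultdict(Counter)   # (prev_tag, word) -> tag counts
        for sent in tagged_sents:
            prev = "<S>"
            for word, tag in sent:
                self.unigram[word][tag] += 1
                self.bigram[(prev, word)][tag] += 1
                prev = tag

    def tag(self, words):
        tagged, prev = [], "<S>"
        for w in words:
            if (prev, w) in self.bigram:      # bigram context seen in training
                t = self.bigram[(prev, w)].most_common(1)[0][0]
            elif w in self.unigram:           # back off to unigram statistics
                t = self.unigram[w].most_common(1)[0][0]
            else:                             # back off to the default: noun
                t = self.default_tag
            tagged.append((w, t))
            prev = t
        return tagged
```

An unseen word such as ``wombat'' falls through both lookups and is tagged as a noun, exactly the behaviour described above.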

The sentence chunker uses a scheme similar to that of the part-of-speech tagger: it is a bigram chunker that backs off to a unigram chunker, trained using the CoNLL 2000 corpus~\cite{conllcorpus}. Instead of grouping the chunks recursively to create a full parse tree of a sentence, the chunks are passed directly to the semantic analyzer as a flat block structure to simplify its implementation.\footnote{This simplifies the implementation of the semantic analyzer, because the entries in VerbNet that these chunks must be matched against are also stored as a flat block structure.} An example of a sentence with a recursive chunk structure would be:

\begin{quote}
``The girl saw that the man said something to the boy that went to the elephant.''
\end{quote}

Below we show the two different ways to group the chunks. In our case, this sentence is passed as a list of chunks (a flat parse tree) to the semantic analyzer.\footnote{Note that this is an actual sentence that we tested.}

\begin{tabular}{p{4.5cm}  p{7cm}}
& \\
Flat Parse Tree: & Full Parse Tree: \\ 
\footnotesize  
\begin{verbatim}
(S
  (NP The/AT girl/NN)
  (VP saw/VBD)
  (NP that/CS the/AT man/NN)
  (VP said/VBD)
  (NP something/PN)
  (VP to/TO)
  (NP the/AT boy/NN)
  (NP that/CS)
  (VP went/VBD to/TO)
  (NP the/AT elephant/NN))
\end{verbatim}
&
\footnotesize  
\begin{verbatim}
(S
  (NP (Det the) (N girl))
  (VP
    (V saw)
    (CONJ that)
    (S
      (NP (Det the) (N man))
      (VP
        (VP (V said) (NP something))
        (PP
          (P to)
          (NP
            (NP (Det the) (N boy))
            (S
              (NP that)
              (VP
                (VP (V went))
                (PP (P to) (NP (Det the) (N elephant)))))))))))
\end{verbatim}
\\ 
%\hline 
\end{tabular} 

%\subsection{Part Of Speech Tagger}
%\subsection{Sentence Chunker}
\subsection{Semantic Analyzer}
\label{sec:Semantic Analyzer}
The semantic analyzer component transforms the tagged chunks and words into first-order logic sentences that represent the semantic meaning of the text. This component uses the VerbNet~\cite{schuler2005verbnet} database, which contains frames with syntactic descriptions and semantic predicates that describe verbs. The tagged chunks and words are first matched to the syntactic description of a verb. When a matching syntactic description is found, the words in the chunks are combined with the associated semantic predicate to obtain the semantic meaning of the chunks.
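The matching step can be sketched as follows. The frame entry is a toy modelled loosely on a VerbNet class; the real VerbNet XML format and NLTK's VerbNet interface differ, so this only illustrates how a flat chunk list is matched against a syntactic description and how the semantic predicate is instantiated.

```python
# Hypothetical frame entry: syntax pattern plus a semantic predicate template.
FRAMES = {
    "chase": {"syntax": ["NP", "VP", "NP"],
              "semantics": "motion({V}) & agent({V}, {NP0}) & theme({V}, {NP1})"},
}

def analyze(chunks):
    """Match a flat chunk list [(label, head), ...] against a verb frame
    and instantiate the associated semantic predicate."""
    labels = [label for label, _ in chunks]
    verb = next(head for label, head in chunks if label == "VP")
    frame = FRAMES.get(verb)
    if frame is None or frame["syntax"] != labels:
        return None  # no matching syntactic description found
    nps = [head for label, head in chunks if label == "NP"]
    sem = frame["semantics"].format(V="e", NP0=nps[0], NP1=nps[1])
    return "exists e. " + sem

analyze([("NP", "Bob"), ("VP", "chase"), ("NP", "Fido")])
# -> 'exists e. motion(e) & agent(e, Bob) & theme(e, Fido)'
```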

\subsection{Theorem Prover \& Model Checker}
\label{sec:theorem and model}
We run the Prover9 theorem prover and the Mace4 model checker~\cite{prover9mace4} in parallel to compute an answer. These tools only report whether the goal is satisfiable or unsatisfiable, or that a time-out has occurred. For closed questions, these outcomes correspond respectively to answering ``yes'', ``no'' and ``unknown''. The theorem prover and model checker are invoked multiple times to solve open questions such as ``Who is Tina Turner?''. For each invocation a different logic constant is used. For example, if we have the constants ``Bob'', ``a singer'' and ``Alice'', the invocations would be ``Is Tina Turner Bob?'', ``Is Tina Turner a singer?'' and ``Is Tina Turner Alice?''. Only ``Is Tina Turner a singer?'' would be satisfiable, hence the answer to this open question would be ``a singer''.
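The open-question loop amounts to the following sketch, where `prove` is a trivial stand-in for the Prover9/Mace4 pair and the knowledge base is a hypothetical example, not output from the real system.

```python
def answer_open_question(entity, candidates, prove):
    """Answer 'Who is <entity>?' by testing 'Is <entity> <c>?' for each
    constant c in the domain.  `prove` stands in for the Prover9/Mace4
    pair: True means a proof was found, False a counter-model, None that
    both tools timed out."""
    for c in candidates:
        if prove(entity, c):
            return c
    return "unknown"

# Trivial stand-in knowledge base instead of a real theorem prover.
facts = {("tina_turner", "a singer")}
answer = answer_open_question(
    "tina_turner", ["Bob", "a singer", "Alice"],
    lambda e, c: (e, c) in facts)
# -> "a singer"
```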

\section{State of the Prototype Implementation}
This section discusses the implementation of the prototype question answering system and the third-party components that were used. Due to the ambitious nature of this project, technical issues (software defects in third-party components) and time constraints, we were unable to implement all the features of the proposed question answering system completely. We have implemented everything in the design (see figure \ref{figure:conceptualdesign}) up to the semantic analyzer component, and the graphical user interface provides detailed information for each of these components. The specifics of the implementation of each component are briefly discussed below.

\subsection{Dialogue System}
Figure \ref{figure:chat} shows the user interface of the dialogue system, which allows the user to communicate with the question answering system.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{1_chat.png}
\caption{Dialogue system interface}
\label{figure:chat}
\end{center}
\end{figure} 

\subsection{Search Engine}
The prototype uses the Yahoo! Web Search service\footnote{\url{http://developer.yahoo.com/search/web/V1/webSearch.html}} to obtain a list of websites that are likely to contain the answer. Figure \ref{figure:webpageinfo} shows the information that the search engine provides to the question answering system.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{2_webpage_info.png}
\caption{Search engine result summary}
\label{figure:webpageinfo}
\end{center}
\end{figure} 

\subsection{Webpage Preprocessor}
Since a significant number of webpages are malformed, i.e. do not adhere to the HTML specifications, we use Beautiful Soup\footnote{\url{http://www.crummy.com/software/BeautifulSoup/}} to turn these into well-formed webpages. The webpage preprocessor then extracts the relevant text using the text-to-tag ratio filtering technique, which we implemented ourselves. Figures \ref{figure:filteredgraph} and \ref{figure:filteredcontent} illustrate what kind of text is filtered for the Tina Turner Wikipedia page.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{3_filtered_content.png}
\caption{Filtered content}
\label{figure:filteredcontent}
\end{center}
\end{figure} 

\subsection{Integrating the Part-of-Speech Tagger and the Chunker}
We used the part-of-speech tagger and the chunker components provided by the NLTK framework to analyze the structure of a sentence. The inner workings of these components for the Tina Turner example are shown in figure \ref{figure:parsed_chunks}. NLTK, the Natural Language Toolkit, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) in the Python programming language.\footnote{See \url{http://www.nltk.org} for more details.}

Although both components are provided by the NLTK framework, they use different representations for tag types. This is due to the underlying corpora: the tagger was trained on the Brown corpus and the chunker on the CoNLL 2000 corpus. We had to introduce an additional intermediate step to map the Brown tag types to CoNLL 2000 tag types.\footnote{The Brown corpus has approximately 450 compound tag types, whereas the CoNLL 2000 corpus has only 44.} For example, the Brown corpus has special tags for instances of the verbs ``to be'' and ``to have'', whereas CoNLL 2000 tags all verbs with the same tag type. This mapping is rather complex, since even general tag types, such as determiners (a, the), have different representations.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{5_parsed_chunks.png}
\caption{Part-of-speech tagging and sentence chunking}
\label{figure:parsed_chunks}
\end{center}
\end{figure} 

\subsection{Semantic Analyzer}
Probably the hardest step in developing this system is to derive an accurate and correct semantic model from the text.

First, we tried to use the propositional logic package of NLTK. Relationships between entities would be represented in a model using set theory. Every entity that could be deduced from a parsed sentence would have to exist in the domain. Using propositional logic, facts could then be deduced from the model. However, set theory does not allow adding higher-order rules such as ``all monkeys are animals'': every entity in the monkey subdomain also had to be added manually to the animal subdomain. Extending the NLTK package to apply these rules automatically was a non-trivial task. Moreover, this approach would probably only have worked in extremely simple cases. Hence, we decided to use a more expressive and efficient way to store the semantics of the text.

We chose first-order logic, since it provides sufficient expressiveness to represent most features of natural languages. For example, the sentence ``all monkeys are animals'' is represented by the formula ``$\forall x.$ monkey$(x) \rightarrow $ animal$(x)$''. Furthermore, first-order logic problems can be solved efficiently using highly optimized theorem provers. We chose the Prover9 theorem prover and the Mace4 model checker, since NLTK provides interfaces for both of them. To effectively deal with grammatical tenses, which express the time at, during, or over which an action described by a verb occurs, we decided to use Davidson-style event semantics~\cite{davidson2001essays}. This enables us to capture logic expressions with temporal constraints and a variable number of event arguments.
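As an illustration of the Davidson-style representation (our own rendering, not a formula taken from the cited work), the sentence ``Bob chases Fido'' introduces an explicit event variable:
\begin{quote}
$\exists e.\ \mathit{chase}(e) \wedge \mathit{agent}(e, \mathit{Bob}) \wedge \mathit{patient}(e, \mathit{Fido})$
\end{quote}
A temporal modifier then simply adds a conjunct over the same event variable, e.g. ``Bob chased Fido yesterday'' adds $\mathit{yesterday}(e)$. This is what makes the style convenient for grammatical tenses and a variable number of event arguments.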

Instead of generating a complete grammar that covers as many words as possible, we decided to implement a system that creates grammar rules whenever a word is not covered by the basic grammar. The basic feature-based grammar consists of a set of rules covering the basic English grammar constructs and a few basic English words. The features are used to gather extra information about the words in the sentence (e.g. whether a word is plural or singular), but also to generate logic expressions using partial lambda expressions defined in every grammar rule. Logic expressions describing a complete sentence are constructed in a bottom-up fashion.
Whenever a word is not covered by the grammar, our system adds extra grammar rules. A missing noun can be added to the grammar with little effort; when an unknown verb is encountered, however, more steps are needed. First, the verb is looked up in VerbNet\footnote{NLTK provides interfaces for VerbNet as well as WordNet.}. Then the correct syntactic frame is determined using the chunk information of the parsed sentence. After that, the semantic representation of that frame is converted to a Davidson-style event lambda expression. Finally, the new grammar rule is added to the grammar and the sentence is parsed again. When a sentence contains more than one unknown word, this process is repeated until all unknown words are covered by the grammar. When an unknown word is neither a noun nor a verb, we distinguish two types of word classes: open word classes, such as adverbs and adjectives; and closed word classes, such as prepositions, determiners, conjunctions and pronouns. Grammar rules for words belonging to the open word classes should be generated in the same way as for verbs, using WordNet and the part-of-speech tag. Rules for all the words in the closed word classes can easily be added to the grammar beforehand.
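The parse-then-extend loop can be sketched as follows. Here `grammar` is a simple word-to-rule mapping and `lookup_rule` stands in for the VerbNet/WordNet lookup; both are drastic simplifications of the real NLTK feature-grammar machinery, kept only to show the control flow.

```python
def parse_with_dynamic_grammar(sentence, grammar, lookup_rule):
    """Parse a sentence, adding rules for unknown words on the fly.

    `grammar` maps words to their grammar rule; `lookup_rule` stands in
    for the VerbNet/WordNet lookup that produces a rule for an unknown
    word.  Returns the rule sequence once every word is covered.
    """
    while True:
        words = sentence.split()
        unknown = [w for w in words if w not in grammar]
        if not unknown:
            # Every word is covered: "parse" by returning the rule sequence.
            return [grammar[w] for w in words]
        for w in unknown:
            grammar[w] = lookup_rule(w)  # derive a rule, then re-parse
```

After a run, the generated rules remain in the grammar, so later sentences containing the same words are parsed without another lookup.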

Currently our system only supports the lookup and generation of rules for unknown words, while the grammar itself only covers a small set of words belonging to the closed word classes. The algorithm for generating new grammar rules for verbs occasionally fails, because the generation of the partial lambda expressions from the VerbNet semantic rules is highly experimental and based on a small set of verbs and their semantic representations. Ensuring the correctness of this algorithm would require checking the semantic representation of every verb; unfortunately, that would consume too much time for a student assignment. In the future, a more efficient mapping from VerbNet entries to lambda expressions should be investigated.


\section{Evaluation}
In this section the current state of the prototype, as described in the previous section, is evaluated using two types of tests: a functionality test and a performance test. The functionality tests should give a proper indication of what types of sentences and questions the system is capable of understanding. The performance tests should give us insight into the performance of the system: which components are the bottleneck, and an indication of the time complexity with respect to the number of sentences in the input text.

The semantic analyzer is split into two components for timing purposes: the grammar generation component and the actual sentence parser that uses the generated grammar. In order to get useful results, some components are bypassed: the web search and webpage preprocessor components, and the grammar generation component in the semantic analyzer. The web search and webpage preprocessor components are not used in either test type because a fixed input text without HTML tags is used. The grammar generation component of the semantic analyzer is skipped because all input sentences can be parsed by the grammar used in both tests; this ensures that the tests do not give faulty results due to the partial implementation of the grammar generation algorithm.

\subsection{Functionality Evaluation}
\label{Functionality Evaluation}
Several tests were executed to determine whether the question answering system gives a correct answer given a text and a question. Table \ref{table:answer evaluation} shows the outcome of these tests; each row shows the question and the answer given by the system.\footnote{Note that the grammar has some problems parsing sentences with the verb ``to be''; therefore some additional logic expressions specifying common facts are added to the input of the theorem prover and model checker, for example ``Every man is a person'' and ``Every woman is a person''. In the future the system could either be bootstrapped with general world knowledge selected by a user, or deduce such facts from WordNet.}

As can be seen in table \ref{table:answer evaluation}, the questions are not real questions but assumptions, because the theorem prover and model checker only answer true or false. If the result is true, a proof was found (and no counter-model exists) and the assumption can be deduced from the input sentences; if the result is false, a counter-model was found (and no proof exists) and the assumption conflicts with the input sentences.

\begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline \textbf{Question} & \textbf{Answer} \\ 
\hline Suzie loves Bob & true \\ 
\hline Suzie loves Steven & true \\ 
\hline Suzie loves Kim & false \\ 
\hline Suzie loves Mary & true \\ 
\hline Suzie loves every woman & false \\ 
\hline Suzie loves a dog & false \\ 
\hline Somebody loves Bob & true \\ 
\hline A dog barks & true \\ 
\hline Somebody chases a dog & true \\ 
\hline Bob chases a dog & true \\ 
\hline Bob chases a person & false \\ 
\hline 
\end{tabular}
\caption{Evaluation results for the question answering system}
\label{table:answer evaluation}
\end{center}
\end{table}

The four following sentences were used as input:
\begin{verbatim}
Suzie loves all men
Suzie loves Mary
Fido barks
Bob chases Fido
\end{verbatim}

Together with the following additional logic expressions:
\begin{verbatim}
man(Bob)      man(Joe)
man(Steven)   woman(Suzie)
man(John)     woman(Mary)
man(Vincent)  woman(Kim)
dog(Fido)
all x. (boy(x) -> male(x))
all x. (man(x) -> person(x))
all x. (girl(x) -> female(x))
all x. (woman(x) -> person(x))
\end{verbatim}

\subsection{Performance Evaluation}
\label{Performance Evaluation}
In order to determine the bottleneck of the system, several tests were conducted in which the total time as well as the time per component needed to compute an answer was measured. The number of input lines was increased in each test in order to make reasonable estimates of the time complexity. Each input line is a sentence of only two or three words: a noun followed by an intransitive verb, or a noun followed by a transitive verb followed by a noun. Statistical variance in the timing was reduced by repeating each test twenty times.
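The per-component measurement can be sketched as the following harness; this is a simplified illustration, not our actual measurement code, and the stage names in the usage example are placeholders for the real tagger/chunker, parser and prover components.

```python
import time

def time_components(components, text, repeats=20):
    """Run a pipeline of (name, function) stages `repeats` times and
    return the mean wall-clock time per stage, feeding each stage's
    output into the next."""
    totals = {name: 0.0 for name, _ in components}
    for _ in range(repeats):
        data = text
        for name, fn in components:
            start = time.perf_counter()
            data = fn(data)
            totals[name] += time.perf_counter() - start
    return {name: t / repeats for name, t in totals.items()}
```

Summing the per-stage means over the parse-part stages and the proof-part stages gives the two curves plotted below.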

\begin{figure}
\begin{center}
\includegraphics[scale=1]{total_runtime.png}
\caption{The total runtime needed to compute an answer, together with the runtime for the parse part and the proof part.}
\label{figure:total_runtime}
\end{center}
\end{figure} 

Figure \ref{figure:total_runtime} shows the total runtime needed to compute an answer, split into the time needed for the parse part and the proof part. The parse part consists of the tagger \& chunker component, the grammar generation component and the sentence parser. The proof part consists of the theorem prover and the model checker component. The graph shows that the parse part takes the most time to complete and that both the parse part and the proof part run in linear time (with respect to the number of sentences).

It is questionable whether the time measured for the proof part would drastically increase (towards exponential time complexity) if more complex sentences (resulting in more complex logic expressions) were used; Prover9 and Mace4 are highly optimized solvers compiled to native code. Although theorem proving in first-order logic is only semi-decidable, both applications are run in parallel, with three possible outcomes: if Prover9 finds a proof, the goal expression can be inferred; if Mace4 finds a counter-model, the goal expression does not hold; and if both applications time out, the answer is unknown.

\begin{figure}
\begin{center}
\includegraphics[scale=1]{parse_runtime.png}
\caption{The total time needed to compute an answer is divided by the number of sentences in the input file}
\label{figure:parse_runtime}
\end{center}
\end{figure} 

Figure \ref{figure:parse_runtime} shows a breakdown of the time per component in the parse part, which shows that the sentence parser component is the real bottleneck of the application. Increasing the complexity of the input sentences would require a more complex grammar with more rules, resulting in an even greater runtime.
The time needed by the grammar generation component per sentence is constant, because all verbs are already known in the grammar (so the component is effectively idle in these tests). If the grammar generation component were fully implemented, its time complexity would depend on the number of words in the sentence.

\section{Final Words}\label{section:conclusion}
We discussed the development of NetOracle, a question answering system that extracts semantic information from the world wide web to produce an answer. The code of this system is available at \url{http://code.google.com/p/netoracle/}. While the system is not completely implemented at this moment, it can still be used for educational purposes to demonstrate many language processing techniques, ranging from text extraction to syntactic analysis. In the future, the development of the system could be completed so that it can actually answer questions. Moreover, adding support for extra knowledge sources like WordNet should significantly improve the chances of answering questions correctly.

\bibliographystyle{plain}
\bibliography{netoracle}

\end{document}

