\chapter{Implementation of IE}
In this chapter we describe the implementation of IE in the application. The IE process has the form of a pipeline: the input of the first layer is a document, and the output of the last layer is the extracted information. Each layer's process is implemented in exactly one module. The pipeline workflow is physically implemented in the \emph{Controller}; Figure~\ref{fig:IEPipeline} represents the logical workflow.

\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{./img/IEPipeline.png}
\caption{Implementation of IE pipeline}
\label{fig:IEPipeline}
\end{figure}

\section{Plain Text Extraction}
The linguistic tool \emph{Treex} can process only documents in plain text format. We received documents in the .doc, .ppt and .pdf formats. To process them, we first need to extract the plain text from these documents so that it can be passed to the linguistic processing. The text extraction must not change the structure of the text, so that the NLP receives relevant input. This can be difficult, since the documents contain tables, enumerations, references and embedded documents. From the structural perspective, PowerPoint presentations are the most complex: slides consist of objects with text and pictures, and an object can contain further nested textual objects. We therefore first need a way to extract the text, and then we will deal with the structure. \href{http://poi.apache.org/}{Apache POI} is a \emph{Java} API that provides methods to extract text from Microsoft documents: Word documents (HWPF+XWPF) and PowerPoint presentations (HSLF+XSLF). It accepts Microsoft documents created both before and after the 2003 format change. \href{http://pdfbox.apache.org/}{Apache PDFBox} is a \emph{Java} library for working with PDF documents that enables text extraction. We will use these open-source libraries.

\subsection{Implementation} Figure~\ref{fig:TextExtractorD} shows the class diagram of this module. \emph{ExtractTextService} implements the \emph{IExtractTextService} interface with the method \emph{extractText(File): String}, which takes a document file and returns a String containing the plain text. Based on the type of the file, the corresponding class using the \emph{Apache POI} or \emph{Apache PDFBox} library is called to extract the text from the document. When the extraction phase is over, the text is sent for post-processing, which reorders it to match the original document's structure. The text is then stored in a subfolder next to the original document, so that processing of the same file can be skipped later if needed.
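The dispatch described above can be sketched as follows. The method and class names here are illustrative (only \emph{ExtractTextService} and \emph{extractText} come from the diagram); the actual Apache POI and Apache PDFBox calls are indicated in comments so the sketch stays self-contained:

```java
import java.util.Locale;

/**
 * Sketch of the extractor dispatch inside ExtractTextService. The helper
 * name chooseExtractor is hypothetical; the POI/PDFBox entry points are
 * only referenced in comments.
 */
public class ExtractTextDispatch {

    /** Decide which library handles a given file, based on its extension. */
    static String chooseExtractor(String fileName) {
        String name = fileName.toLowerCase(Locale.ROOT);
        if (name.endsWith(".doc") || name.endsWith(".docx")) {
            // Apache POI: HWPF (WordExtractor) for .doc, XWPF (XWPFWordExtractor) for .docx
            return "poi-word";
        } else if (name.endsWith(".ppt") || name.endsWith(".pptx")) {
            // Apache POI: HSLF for .ppt, XSLF for .pptx
            return "poi-powerpoint";
        } else if (name.endsWith(".pdf")) {
            // Apache PDFBox: load the document and run a text stripper over it
            return "pdfbox";
        }
        throw new IllegalArgumentException("Unsupported document type: " + fileName);
    }
}
```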

\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{./img/textExtractDiagram.png}
\caption{Text Extraction Diagram}
\label{fig:TextExtractorD}
\end{figure}

\subsection{Observation}
The process extracts all text from a document without any losses, but the structure of the plain text sometimes differs from the original document. PDF files are extracted without any structural changes. The problems come with Word and PowerPoint documents. The extracted texts contain additional empty lines. These have no impact on Treex performance, but if we want to edit the documents, they do not look right; therefore we removed the empty lines and restored a suitable structure. In PowerPoint presentations, the structure of the text changes. The documents in the Experts' Profiles all share the same structure in the original presentations, yet the plain text files differ, and moreover the structure differs even between the plain text files themselves. The \emph{Apache POI} library creates its own internal structure of the document; it processes each object one by one and appends its text to the output. The different result for each plain text suggests that even though the PowerPoint presentations look the same, the nested objects do not have their parent objects set identically in each presentation, which causes a different order of processing. All attempts at reordering failed due to this seemingly random behavior. In Word documents, the text extraction of paragraphs works well. During the processing of tables, however, hidden table names and column headers appear in the resulting text. We cannot determine whether a word is a table name or part of the text, so we have to accept that they will appear in the result. The problem cannot be solved by a general solution: without broad testing of each document group and optimizing the extraction process for each particular group individually, we cannot get close to fully correct results.
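The empty-line cleanup mentioned above can be sketched as a small post-processing helper. The class and method names are hypothetical; the thesis does not name the actual implementation:

```java
/**
 * Sketch of the post-processing step that removes the extra empty lines
 * produced by Word/PowerPoint extraction. Runs of blank lines are
 * collapsed into a single blank line, preserving paragraph breaks.
 */
public class EmptyLineCleaner {

    /** Collapse runs of blank lines into a single blank line. */
    static String removeExtraEmptyLines(String text) {
        StringBuilder out = new StringBuilder();
        boolean lastBlank = false;
        for (String line : text.split("\n", -1)) {
            boolean blank = line.trim().isEmpty();
            if (blank && lastBlank) {
                continue; // skip repeated blank lines
            }
            if (out.length() > 0) {
                out.append('\n');
            }
            out.append(line);
            lastBlank = blank;
        }
        return out.toString();
    }
}
```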

\section{Linguistic processing}
In the theoretical part of this thesis we described the NLP tool Treex in detail. In this section we show how it is integrated into the application. Treex runs only on Linux-based systems. It takes a plain text file as input, processes it and writes the result in the CoNLL-X format into an output file. Every time Treex is invoked, it loads all required modules into memory, taking around 3\,GB. The memory usage is largely determined by the chosen parser. We could use smaller models, but that would reduce the precision of the processed text. We have chosen the \emph{MSTParser}, which has decent precision and memory requirements. Besides the high memory usage during processing, Treex also takes almost all CPU time, which causes the operating system to lag. Considering this, we need to allow Treex to run on a different server than the rest of the application. The simplest solution is to invoke the Treex service via \emph{Spring RMI}. This module is therefore split into a client side and a server side.

\subsection{Implementation}
The NLP module is shown in Figure~\ref{fig:NLPImpl}. The client is implemented in \emph{LinguisticService} and offers the application the method \emph{linguisticProcessing} to call the NLP. The method accepts a plain text file, reads the text, calls the remote method \emph{remoteNlp} and waits for the result. Once the result is available, it stores it in a subfolder to avoid processing the same file again later. The server side contains the class \emph{RemoteNlp}, which implements the \emph{IRemoteNlp} interface and provides the method \emph{remoteNlp}: it receives a text, invokes Treex, reads the result from the output file and returns it back to the client. Treex is invoked directly from the code by calling the \emph{Runtime.exec()} method.
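The external invocation on the server side might look roughly like the following sketch. The actual Treex command line and scenario files are not reproduced here; \emph{ProcessBuilder} is used in place of the plain \emph{Runtime.exec()} because it makes capturing the standard output easier:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch of how a class like RemoteNlp might invoke an external
 * command such as the Treex scenario. The helper name "run" is
 * hypothetical; the real Treex arguments are not shown.
 */
public class ExternalToolRunner {

    /** Run a command and return its standard output as one string. */
    static String run(String... command) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true); // merge stderr into stdout
            Process p = pb.start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = r.readLine()) != null) {
                    out.append(line).append('\n');
                }
            }
            int exit = p.waitFor();
            if (exit != 0) {
                throw new IllegalStateException("Command failed with exit code " + exit);
            }
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```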

\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{./img/TreexImpl.png}
\caption{NLP Diagram}
\label{fig:NLPImpl}
\end{figure}

\subsection{Observation}
Treex does not allow keeping all required models preloaded in memory. Every time a new NLP request comes, it has to load all models from disk into memory, which takes about 40 seconds. After the loading phase is over, Treex performance is determined by the size of the text: it takes about 5--10 seconds per Word page, depending on the text density. In most cases the duration of the loading phase exceeds the processing time itself. For future business use it would be appropriate to keep all models preloaded in memory. This would allow processing of hundreds of texts; otherwise the NLP will be the performance bottleneck of the application.

\section{Document Tree Structure}
The result of the NLP is stored in a text file. To make searching easier, it is better to load the result into an application object and then apply the patterns over its internal structure. We call this object a Data Access Object (DAO). The purpose of this module is to create this DAO object and fill it with the data from the NLP process.

\subsection{Implementation}
A document has a hierarchical tree structure, shown in Figure~\ref{fig:docTree}: the root node is the document, its direct children are paragraphs, paragraphs contain sentences, and the sentences have words as leaves.

\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{./img/docTree.png}
\caption{Document Tree Structure}
\label{fig:docTree}
\end{figure}

All information about the CoNLL-X Treex output format, together with the example in Table~\ref{tab:Conllx}, can be found in the theoretical section. The idea is that all linguistic information is connected only with words; paragraphs and sentences are used in the internal structure as a reference to the ordering of the original document. The information related to the words is:

\begin{itemize}
	\item A sentence has a list of words; the index of a word in the list determines its original position in the sentence
	\item The textual representation of the word
	\item The word lemma
	\item Morphological information such as word type (noun, verb, \dots), gender, number, case and person
	\item The constituent (subject, object, predicate, \dots)
	\item The index of the parent word on which the word depends, used to create a virtual dependency tree of the sentence
	\item A flag whether the word is on the left or on the right side of its parent node
\end{itemize}

The module receives the result, creates a document root, adds a new paragraph to the root and a new sentence to the paragraph, and starts processing the file line by line. Each non-empty line represents the NLP result for one word in the sentence: all information is extracted from the line and filled into the word leaf object. An empty line marks the end of a sentence; the module then creates a new sentence object and adds it to the currently active paragraph. When the end of the document is reached, the internal document structure contains all data from the NLP of the document and is returned.
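The line-by-line construction above can be sketched as follows. This is a simplified sketch: only a few CoNLL-X columns (FORM, LEMMA, HEAD, DEPREL) are mapped, the paragraph level is omitted, and the class and field names are illustrative rather than the application's actual ones:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of the document-tree builder: CoNLL-X lines are
 * turned into sentences of word nodes, with an empty line ending a
 * sentence. Names are illustrative.
 */
public class DocumentTreeBuilder {

    static class Word {
        final String form, lemma, deprel;
        final int parentIndex; // index of the governing word, 0 = root of the sentence
        Word(String form, String lemma, int parentIndex, String deprel) {
            this.form = form; this.lemma = lemma;
            this.parentIndex = parentIndex; this.deprel = deprel;
        }
    }

    /** Parse CoNLL-X lines into a list of sentences (lists of words). */
    static List<List<Word>> parse(List<String> lines) {
        List<List<Word>> sentences = new ArrayList<>();
        List<Word> current = new ArrayList<>();
        for (String line : lines) {
            if (line.trim().isEmpty()) {       // empty line = end of sentence
                if (!current.isEmpty()) {
                    sentences.add(current);
                    current = new ArrayList<>();
                }
                continue;
            }
            String[] c = line.split("\t");
            // CoNLL-X columns: ID FORM LEMMA CPOS POS FEATS HEAD DEPREL ...
            current.add(new Word(c[1], c[2], Integer.parseInt(c[6]), c[7]));
        }
        if (!current.isEmpty()) {
            sentences.add(current); // flush the last sentence if the file has no trailing blank line
        }
        return sentences;
    }
}
```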

\section{IE implementation}
The purpose of this module is to extract information from the document, based on the search configuration file and the NLP data. The output of this process is a list of extracted triples together with terminologies and their frequencies. The search is invoked from the \emph{SearchService} class by the method \emph{extractBase(IDocument document, ISearchRules rules)}.

\subsection{Search rules and search process}
The search is based on finding a subtree in the sentence dependency tree that fulfills constraints on word nodes. In the theoretical section we explained what we are searching for in the document: entity detection and relation extraction. We are searching for triples of the form subject--predicate--object. A predicate denotes the relation between subject and object. A subject is the originator; in other words, it denotes what a sentence is about. An object is the target of a sentence. We have two possibilities for how to create and apply patterns. We can create a rule that contains all tree parts. Each part of the rule (subject, predicate and object) can have various types of constraints; to cover all possible patterns we would need to combine each of them, resulting in dozens of patterns applied to each sentence. There would be patterns differing only in one constraint, causing a lot of redundancy in searching. The better approach is to create and apply search rules for each part of the triple individually and combine them on a common predicate. Terminology extraction, in contrast with triples, consists of one part only: we are searching for any noun with a significant meaning, determined by its constituent in the sentence. The search rules are:
\begin{itemize}
	\item \textbf{Subject} - has to depend on the predicate directly, or its parent node depends on the predicate and the parent node is an auxiliary node. Only the \emph{Subject} constituent with word type noun is allowed.
	\item \textbf{Predicate} - any predicate constituent in the sentence, together with any auxiliary verb that develops the predicate.
	\item \textbf{Object} - has to depend on the predicate directly, or its parent node depends on the predicate and the parent node is an auxiliary node. Allowed constituents are \emph{Object, Complement, Adverbial}; the word type is noun.
	\item \textbf{Terminology} - any noun whose constituent is \emph{Subject, Object, Adverbial or Complement}. Any attribute directly dependent on this node is appended to the result.
\end{itemize}

With the search rules set up, we can advance to the search itself. For triples:
\begin{enumerate}
	\item For each sentence repeat:
	\begin{enumerate}
		\item Apply rules and find all predicates, subjects and objects
		\item Filter the duplicate matches
		\item For each subject, find the predicate in the sentence on which it depends and create a subject--predicate pair
		\item For each object, find the predicate in the sentence on which it depends and create an object--predicate pair
		\item Create a triple from every subject pair and object pair that share the same predicate
		\item Add the triples found in the sentence to the list of the document's triples
	\end{enumerate}
	\item Return result
\end{enumerate}
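The pairing and combining steps above can be sketched as follows. The \emph{Match} type is a stand-in for the application's actual match objects, and the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the triple-assembly step: subjects and objects that were
 * paired with a predicate are combined into subject-predicate-object
 * triples whenever they share the same predicate.
 */
public class TripleAssembler {

    static class Match {
        final String word;      // the matched subject/object word
        final String predicate; // the predicate it depends on
        Match(String word, String predicate) {
            this.word = word; this.predicate = predicate;
        }
    }

    /** Combine subject and object pairs that depend on the same predicate. */
    static List<String[]> assemble(List<Match> subjects, List<Match> objects) {
        List<String[]> triples = new ArrayList<>();
        for (Match s : subjects) {
            for (Match o : objects) {
                if (s.predicate.equals(o.predicate)) {
                    triples.add(new String[]{s.word, s.predicate, o.word});
                }
            }
        }
        return triples;
    }
}
```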

To avoid losing information from a sentence when the object is missing, we keep the partial result as a pair, i.e. a triple with an empty object. The terminology search algorithm is similar to the triple search:
\begin{enumerate}
	\item For each sentence, apply the rules
	\begin{enumerate}
		\item If a match was found, try to find any child nodes that are developing the match and are allowed by the rules.
		\item Reorder the words based on the position in the sentence and remove duplicate matches
	\end{enumerate}
	\item Group the terminology matches based on their lemma
	\item For each group, create a result match that has the common lemma, the unique textual representations and the size of the group as the frequency of the term in the document.
\end{enumerate}
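The grouping steps (2) and (3) can be sketched as follows. The \emph{Term} type and the representation of a match as a (lemma, textual form) pair are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of the terminology grouping step: matches are grouped by lemma,
 * each group keeping its unique textual representations and the group
 * size as the frequency of the term in the document.
 */
public class TerminologyGrouper {

    static class Term {
        final String lemma;
        final Set<String> forms; // unique textual representations
        final int frequency;     // number of occurrences in the document
        Term(String lemma, Set<String> forms, int frequency) {
            this.lemma = lemma; this.forms = forms; this.frequency = frequency;
        }
    }

    /** Group (lemma, form) matches and count the occurrences per lemma. */
    static List<Term> group(List<String[]> matches) { // each match: {lemma, form}
        Map<String, List<String>> byLemma = new LinkedHashMap<>();
        for (String[] m : matches) {
            byLemma.computeIfAbsent(m[0], k -> new ArrayList<>()).add(m[1]);
        }
        List<Term> result = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : byLemma.entrySet()) {
            result.add(new Term(e.getKey(),
                    new LinkedHashSet<>(e.getValue()), // deduplicate the forms
                    e.getValue().size()));
        }
        return result;
    }
}
```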

Removing duplicate matches in the search algorithm means the following: if one rule is a subpart of another, then whenever the search finds a match for the larger rule, it must also have found the subpart, giving duplicate matches. After all matches have been found, the algorithm goes through the list of matches and removes the duplicate sub-matches. Both the list of triples and the list of terminologies are added to the \emph{ExtractedKnowledge} object and passed back to the \emph{Controller} as the result.

\subsection{Writing search configuration}
The search configurations are written in an XML document. When the IE search module is invoked, the configuration is loaded into memory and the rules for the subject, predicate and object parts of the triples and for terminologies are extracted from it. The default configuration is shown in Appendix~\ref{App:Search}. The parts of the search configuration are:
\begin{itemize}
	\item \emph{ruleset} - contains the set of rules for one search part. The \emph{type} attribute denotes the set it belongs to; possible values are \emph{predicate, subject, object} or \emph{terminology}. The ruleset node contains the individual rules.
	\item \emph{rule} - represents one particular rule and contains one or more nodes.
	\item \emph{node} - represents a restriction placed on a node (word) in the sentence dependency tree. Its child tags create restrictions on the constituent, the parent word's constituent and the word type of the node.
	\item \emph{constituent} - restriction on the constituent of the node, in uppercase
	\item \emph{vassal} - marks the node as a child of another node; values are true or false
	\item \emph{parentType} - restriction on the parent node's constituent, in uppercase, or null if the node is the root node of the rule
	\item \emph{wordType} - restriction on the word type of the node, in uppercase, or null if the restriction is not needed
\end{itemize}
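To illustrate how the tags above fit together, the following is a hypothetical fragment written from the description alone; the actual default configuration is the one in the appendix:

```xml
<!-- Hypothetical example of one subject rule, not the shipped default. -->
<ruleset type="subject">
  <rule>
    <node>
      <constituent>SUBJECT</constituent>
      <vassal>false</vassal>
      <parentType>PREDICATE</parentType>
      <wordType>NOUN</wordType>
    </node>
  </rule>
</ruleset>
```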

The user can alter the rules or create new ones and apply them to the IE from the documents.