\chapter{IE evaluation}
This chapter evaluates the IE and NLP processing of the test documents and presents the corresponding quantitative metrics. The qualitative evaluation is given at the end of the thesis.

\section{Quantitative results}
The proposed search rules have been applied to the provided documents, with the results shown in Table~\ref{tab:IEResultCounters}. The columns are as follows: document group, number of sentences, sentences with a predicate, sentences without a predicate (phrases), word count, triple count, unique terminology count, and overall terminology frequency.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{IE and NLP statistics}
        \label{tab:IEResultCounters}
		\begin{tabular}{| c | c | c | c | c | c | c | c |}
			\hline
			Document Group & Sentences & Sent. w/ Pred. & Phrases & Words & Triples & Uniq. Term. & Term. Freq. \\
			\hline
			Management Documentation & 8929 & 4737 & 4192 & 109568 & 6497 & 5624 & 10121\\
			\hline
			Substitution Program & 1138 & 392 & 746 & 12527 & 464 & 870 & 1145\\
			\hline
			Experience records & 12405 & 5738 & 6667 & 137586 & 6042 & 8350 & 14450\\
			\hline
			Expert's profiles & 1157 & 275 & 882 & 9675 & 125 & 734 & 945\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}

The ratios between the numbers of triples and terminologies and the number of sentences are provided in Table~\ref{tab:IEResultRatio}. The first column contains the group name, followed by the ratio of triples to all sentences, the ratio of triples to sentences with a predicate, the ratio of unique terminologies to all sentences, and, in the last column, the ratio of the total terminology frequency to all sentences.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{Triples and terminology statistics}
        \label{tab:IEResultRatio}
		\begin{tabular}{| c | c | c | c | c |}
			\hline
			Document Group & Triples : Sentences & Triples : Sent. Pred. & Uniq. Term. : Sentences & Term. Freq. : Sentences\\
			\hline
			Management Documentation & 72.76\% & 137.15\% & 62.98\% & 113.24\%\\
			\hline
			Substitution Program & 40.77\% & 118.37\% & 76.45\% & 100.61\%\\
			\hline
			Experience records & 48.71\% & 105.30\% & 67.31\% & 116.48\%\\
			\hline
			Expert's profiles & 10.80\% & 45.45\% & 63.43\% & 81.67\%\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}
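As a sanity check, the ratios above can be recomputed from the raw counters in Table~\ref{tab:IEResultCounters}. The following sketch hard-codes the counts from that table and reproduces the percentages, up to small rounding differences against the published values:

```python
# Reproduce the ratios of the second table from the raw counts of the first.
# Tuple layout: (sentences, sentences with predicate, triples,
#                unique terminologies, total terminology frequency)
counts = {
    "Management Documentation": (8929, 4737, 6497, 5624, 10121),
    "Substitution Program":     (1138, 392, 464, 870, 1145),
    "Experience records":       (12405, 5738, 6042, 8350, 14450),
    "Expert's profiles":        (1157, 275, 125, 734, 945),
}

for group, (sent, sent_pred, triples, uniq, freq) in counts.items():
    print(f"{group}: "
          f"triples/sentences = {100 * triples / sent:.2f}%, "
          f"triples/sent_pred = {100 * triples / sent_pred:.2f}%, "
          f"uniq_term/sentences = {100 * uniq / sent:.2f}%, "
          f"term_freq/sentences = {100 * freq / sent:.2f}%")
```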

The statistics show that, except for the Management Documentation, there are more sentences without a predicate than with one. Triples cannot be extracted from sentences without a predicate, so the opportunity to extract entities and their relations is lost in those sentences. However, if a sentence has a predicate, a full triple was found in almost all cases, the Expert's profiles PowerPoint presentations being the exception. A ratio above 100\% means that, on average, more than one triple was extracted from these sentences. A sentence is considered full if it has an originator (subject), a predicate, and a target (object).

To determine why there are so many sentences without a predicate (phrases), the NLP results have to be examined. The PowerPoint presentations contain only short phrases and many abbreviations; full sentences are simply missing from these presentations, and the NLP cannot do anything about that. Nevertheless, the IE managed to achieve a decent ratio (almost 82\%) between the number of sentences and the extracted terminologies. Consequently, terminologies, rather than triples, are the most suitable means of characterizing the PowerPoint presentations.

The Management Documentation, Substitution Program, and Experience records are Word and PDF (printed from Word) documents with large continuous textual parts. Even though the ratio between triples and all sentences is decent, these documents contain a large number of sentences without a predicate. This might be a problem of the NLP, or full sentences might genuinely be missing. An inspection of the NLP results and the documents shows that it is a mixture of both problems.

\section{Discovered NLP problems} 
The NLP tool Treex uses statistical machine learning. The models used in Treex were trained on newspaper articles, so the dependency trees are supposed to be parsed from full sentences, or sentences close to them. Any missing important part of speech, or any grammatically incorrect sentence, leads to a degenerate or incorrect dependency tree. Even if humans would understand such sentences and could recognize the information in them, machines cannot. The only possible fix is for authors to write full sentences and use fewer abbreviations, which would increase the efficiency of the NLP.

\subsection{Enumerations}
Another major processing problem arises when the text is structured with bullets or parentheses. The NLP cannot work with information spread across separate sentence fragments. The following example has been taken from one of the provided documents:

\begin{quote}
\emph{Podkladem pro přiznání a výpočet mimořádné odměny jsou:\\
- vyhodnocené Prezenční listiny nebo Záznamy o školení (např. v případě školicích dnů, profesních školení, apod.),\\
- vyplněné Třídní knihy (např. v případě specifické základní přípravy),\\
- potvrzené Záznamové listy (v případě obecné části stáže),\\
- protokoly z přezkoušení}
\end{quote}

\noindent In English: \emph{The basis for granting and calculating an extraordinary bonus are: evaluated Attendance Sheets or Training Records (e.g.\ for training days, professional training, etc.), completed Class Registers (e.g.\ for specific basic training), confirmed Record Sheets (for the general part of an internship), and examination protocols.}

For a human it is easy to determine which parts are related, but Treex is unable to process this structure. It cannot process the text as one large sentence, because the text is fragmented into individual sentences. The above example can be rewritten into the following model:

\begin{itemize}
	\item \textbf{A:}
	\begin{itemize}		
		\item \textbf{B}
		\item \textbf{C}
		\item \textbf{D}
		\item \textbf{E}
	\end{itemize}
\end{itemize}

A possible solution could be to reorganize the example using regular expressions and create the sentences A~B, A~C, A~D, A~E. However, the described model is not uniform across all documents: A does not always contain a predicate and a subject, nor do B, C, D, E always contain only objects. Different preprocessing strategies that change the structure of the text based on the detected bullets have been tried. They increased the number of full sentences in some parts of the documents while decreasing it in others. Without a unified use of bullets, a general strategy for solving this issue cannot be created.
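The A~B, A~C, \dots\ rewriting described above can be sketched as a short regular-expression preprocessor. This is a hypothetical sketch assuming the simplest bullet style (a leading \texttt{-}, \texttt{*}, or bullet character); as noted, it is not a general solution:

```python
import re

def expand_bullets(text):
    """Rewrite an introductory clause followed by bullet items
    into one sentence per item (the A B, A C, ... model)."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    head = lines[0].rstrip(":")          # the introductory clause A
    sentences = []
    for line in lines[1:]:
        # strip a leading bullet marker and trailing list punctuation
        item = re.sub(r"^[-*•]\s*", "", line).rstrip(",.")
        sentences.append(f"{head} {item}.")
    return sentences
```

Applied to the quoted example, this would yield one full sentence per bullet item, each reusing the introductory clause as its subject and predicate.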

\subsection{Terms specification}
\begin{quote}
\emph{Lektor – zaměstnanec ČEZ, a. s., který na základě písemného pověření (Zadání lektorské činnosti) vystaveného útvarem RLZ realizuje odbornou problematiku formou teoretického školení určenému okruhu zaměstnanců.}
\end{quote}

\noindent In English: \emph{Lecturer – an employee of ČEZ, a. s., who, on the basis of a written authorization (Assignment of Lecturing Activity) issued by the RLZ department, delivers professional subject matter in the form of theoretical training to a designated group of employees.}

This example shows a missing predicate in the sentence; the predicate is replaced with a dash. Treex treats the dash as a delimiter and splits the example into two separate sentences. Apart from the missing predicate, splitting the sentence in two means the context is lost for the NLP. To fix this problem, the dash would have to be replaced with the right predicate, in this example the verb \emph{is} (Czech \emph{je}). The problem seems unsolvable at the moment, because not only does the appropriate verb have to be chosen, but also its correct inflection (considering the Czech language). It is also hard for the NLP to process very long clauses consisting of main and subordinate clauses. The probability of producing the right dependency tree decreases as the number of possible dependency trees of the clause grows. The simpler the clause is, the bigger the chance of obtaining the right dependency tree.
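Although replacing the dash with a correctly inflected Czech verb is out of reach, such definition-style sentences can at least be detected before Treex splits them. A minimal, hypothetical sketch (the pattern and function name are illustrative assumptions, not part of the implemented pipeline; terms containing a hyphen would not match):

```python
import re

# Match "Term – definition": a short dash-free term, an en dash or
# hyphen surrounded by whitespace, then the rest of the sentence.
DEF_PATTERN = re.compile(r"^(?P<term>[^–-]{1,60})\s+[–-]\s+(?P<definition>.+)$")

def find_dash_definitions(sentence):
    """Return (term, definition) if the sentence uses a dash
    in place of a predicate, otherwise None."""
    match = DEF_PATTERN.match(sentence.strip())
    if match:
        return match.group("term").strip(), match.group("definition").strip()
    return None
```

Flagged sentences could then be routed to a separate handler, e.g.\ one that extracts the term directly as terminology instead of attempting triple extraction.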
\section{Summary}
If the sentences were simpler and did not contain enumerations or dashes in place of predicates, a higher number of triples and terminologies would be extracted. Not only would more information be extracted, but the IE could also cover larger parts of the documents. Even so, the number of extracted entities and relations is good enough.

To prove the proposed usability, a set of newspaper articles was processed. The articles consisted of large continuous texts. Treex returned 85\% of sentences with a predicate and 15\% phrases, and the resulting IE achieved a triples-to-sentences ratio of over 80\%. This observation implies that grammatically correct sentences in continuous, simple text provide a sufficient basis for NLP and IE.