\chapter{IE evaluation}
This chapter evaluates the IE and NLP processing of the test documents and presents quantitative metrics for both. A qualitative evaluation is given at the end of the thesis.

\section{Results of the thesis}
The proposed search rules have been applied to the provided documents, with the results shown in Table~\ref{tab:IEResultCounters}. The columns are as follows: document group, number of sentences, sentences with a predicate, sentences without a predicate (phrases), word count, triple count, unique terminology count, and overall terminology frequency.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{Information extraction results: counts}
        \label{tab:IEResultCounters}
		\begin{tabular}{| c | c | c | c | c | c | c | c |}
			\hline
			Document Group & Sentences & Sent. w/ Pred. & Phrases & Words & Triples & Uniq. Term. & Term. Freq. \\
			\hline
			Management Documentation & 8929 & 4737 & 4192 & 109568 & 6497 & 5624 & 10121\\
			\hline
			Substitution Program & 1138 & 392 & 746 & 12527 & 464 & 870 & 1145\\
			\hline
			Experience records & 12405 & 5738 & 6667 & 137586 & 6042 & 8350 & 14450\\
			\hline
			Expert's profiles & 1157 & 275 & 882 & 9675 & 125 & 734 & 945\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}

The ratios between the number of sentences and the numbers of triples and terminologies are given in Table~\ref{tab:IEResultRatio}. The first column contains the group name, followed by the number of triples as a percentage of all sentences, the number of triples as a percentage of sentences with a predicate, the number of unique terminologies as a percentage of all sentences, and, in the last column, the total terminology frequency as a percentage of all sentences.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{Information extraction results: ratios}
        \label{tab:IEResultRatio}
		\begin{tabular}{| c | c | c | c | c |}
			\hline
			Document Group & Sentences:Triples & SentPred:Triples & Sentences:UniqueTerm & Sentences:Term\\
			\hline
			Management Documentation & 72.76\% & 137.15\% & 62.98\% & 113.24\%\\
			\hline
			Substitution Program & 40.77\% & 118.37\% & 76.45\% & 100.61\%\\
			\hline
			Experience records & 48.71\% & 105.30\% & 67.31\% & 116.48\%\\
			\hline
			Expert's profiles & 10.80\% & 45.45\% & 63.43\% & 81.67\%\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}
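The percentages in Table~\ref{tab:IEResultRatio} are simply the corresponding counts from Table~\ref{tab:IEResultCounters} divided by the sentence (or predicate-sentence) counts. A minimal sketch of the computation follows, with the counts hard-coded from the tables; last-digit differences against the printed values are rounding artifacts:

```python
# Raw counts per document group, taken from the result table:
# (sentences, sentences with predicate, triples, unique terms, term frequency)
counts = {
    "Management Documentation": (8929, 4737, 6497, 5624, 10121),
    "Substitution Program":     (1138,  392,  464,  870,  1145),
    "Experience records":       (12405, 5738, 6042, 8350, 14450),
    "Expert's profiles":        (1157,  275,  125,  734,   945),
}

def ratios(sent, sent_pred, triples, uniq_term, term_freq):
    """Return the four percentages reported in the ratio table."""
    pct = lambda a, b: round(100.0 * a / b, 2)
    return (pct(triples, sent),        # Sentences:Triples
            pct(triples, sent_pred),   # SentPred:Triples
            pct(uniq_term, sent),      # Sentences:UniqueTerm
            pct(term_freq, sent))      # Sentences:Term

for group, c in counts.items():
    print(group, ratios(*c))
```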

The statistics show that, except for the Management Documentation, there are more sentences without a predicate than with one. Triples cannot be extracted from sentences without a predicate, so the opportunity to extract entities and their relations is lost in these sentences. However, when a sentence does have a predicate, a full triple was found in it in almost all cases, with the exception of the Expert's profiles PowerPoint presentations. A value above 100\% means that more than one triple was extracted from such sentences on average. A triple is considered full if it has an originator (subject), a predicate, and a target (object).

To determine why there are so many sentences without a predicate (phrases), the NLP results have to be examined. The PowerPoint presentations contain only short phrases and many abbreviations; full sentences are simply missing from them, and the NLP cannot do anything about that. Nevertheless, the IE still achieved a decent ratio (almost 82\%) between the number of sentences and the number of extracted terminologies. Consequently, terminologies, rather than triples, are the most suitable means of characterizing the PowerPoint presentations.

The Management documentation, Substitution program, and Experience records are Word and PDF (printed from Word) documents with large continuous textual parts. Even though the ratio between triples and all sentences is decent, these documents contain a huge number of sentences without a predicate. This might be a problem of the NLP, or full sentences might simply be missing. An examination of the NLP results and the documents showed that it is a mixture of both problems.

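The distinction between sentences with a predicate and phrases, used throughout the tables above, can be sketched as a simple check over tagged tokens. This is a schematic illustration only; the actual pipeline derives the distinction from Treex's dependency trees, and the token/tag pairs below are invented for the example:

```python
def has_predicate(tagged_sentence):
    """A sentence counts as full only if at least one token is tagged as a verb."""
    return any(pos == "VERB" for _, pos in tagged_sentence)

# Toy tagger output: one full sentence and one predicate-less phrase.
sentences = [
    [("Lektor", "NOUN"), ("realizuje", "VERB"), ("školení", "NOUN")],
    [("protokoly", "NOUN"), ("z", "ADP"), ("přezkoušení", "NOUN")],
]

full = sum(has_predicate(s) for s in sentences)
print(full, len(sentences) - full)  # prints: 1 1
```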
The NLP tool Treex relies on statistical machine learning, and its models were trained on newspaper articles. Dependency trees are expected to be parsed from full sentences, or from sentences close to being full. Any missing essential part of speech, or any grammatically incorrect sentence, leads to a degenerate or incorrect dependency tree. Even though a human would understand such sentences and could recognize the information in them, a machine cannot. The only feasible fix is for authors to write full sentences and use fewer abbreviations, which would increase the efficiency of the NLP.

Another major processing problem arises when the text is structured with bullets or parentheses: the NLP cannot work with information that is spread across what it sees as separate sentences. The following example has been taken from one of the provided documents:

\begin{quote}
\emph{Podkladem pro přiznání a výpočet mimořádné odměny jsou:\\
- vyhodnocené Prezenční listiny nebo Záznamy o školení (např. v případě školicích dnů, profesních školení, apod.),\\
- vyplněné Třídní knihy (např. v případě specifické základní přípravy),\\
- potvrzené Záznamové listy (v případě obecné části stáže),\\
- protokoly z přezkoušení}
\end{quote}

In English, the example roughly reads: ``The basis for granting and calculating the extraordinary bonus are: evaluated attendance sheets or training records (e.g., for training days, professional trainings, etc.), completed class books (e.g., for specific basic training), confirmed record sheets (for the general part of an internship), and examination protocols.'' For a human it is easy to determine which parts are related, but Treex is unable to process this structure: it cannot treat it as one large sentence, because the text is fragmented into individual lines.

The above example can be rewritten as the following model:

\begin{itemize}
	\item \textbf{A:}
	\begin{itemize}		
		\item \textbf{B}
		\item \textbf{C}
		\item \textbf{D}
		\item \textbf{E}
	\end{itemize}
\end{itemize}
A possible solution could be to reorganize the example using regular expressions and create the sentences A B, A C, A D, A E. However, can we be sure that this model is uniform across all documents, i.e., that only A contains the predicate and subject while B, C, D, E contain only objects? We tried different strategies; each of them increased the number of full sentences in one part of the text while at the same time decreasing it in another. Without unifying the style in which bullets are used across the documents, no general strategy for solving this issue can be devised.
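Under the assumption that only the introductory line A carries the subject and predicate, the regular-expression reorganization could be sketched as follows. This is a best-effort heuristic, not a general solution, for exactly the reasons stated above:

```python
import re

def expand_bullets(text):
    """Rewrite an 'A: - B - C ...' bullet structure into the sentences
    'A B.', 'A C.', ...  Assumes the stem line ends with a colon and
    every bullet line starts with a dash or a similar bullet marker."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    stem = lines[0].rstrip(":")
    sentences = []
    for line in lines[1:]:
        item = re.sub(r"^[-•*]\s*", "", line).rstrip(",.")
        sentences.append(f"{stem} {item}.")
    return sentences

example = """Podkladem pro přiznání a výpočet mimořádné odměny jsou:
- vyhodnocené Prezenční listiny nebo Záznamy o školení,
- vyplněné Třídní knihy,
- potvrzené Záznamové listy,
- protokoly z přezkoušení"""

for s in expand_bullets(example):
    print(s)
```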
\begin{quote}
\emph{Lektor – zaměstnanec ČEZ, a. s., který na základě písemného pověření (Zadání lektorské činnosti) vystaveného útvarem RLZ realizuje odbornou problematiku formou teoretického školení určenému okruhu zaměstnanců.}
\end{quote}
In English, the example roughly reads: ``Lecturer -- an employee of ČEZ, a. s., who, on the basis of a written authorization (Assignment of Lecturing Activity) issued by the RLZ department, delivers professional subject matter in the form of theoretical training to a designated group of employees.'' This example shows a sentence whose predicate is missing: it has been replaced by a dash. Treex treats the dash as a delimiter and splits the example into two separate sentences, so we lose not only the sentence structure but also the subject. To fix this problem we would need to replace the dash with the right predicate, in this example the verb \emph{je} (``is''). The problem seems unsolvable at the moment, because we would have to choose not only the appropriate verb but also its correct inflection, and inflection is rich in Czech.

The example also illustrates another fact: it is hard for NLP to process very long clauses composed of main and subordinate clauses. The chance of obtaining the right dependency tree shrinks as the number of possible dependency trees of the clause grows; the simpler the clause, the better the chance of getting the correct tree.

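Returning to the dash example: a naive repair would substitute the Czech copula \emph{je} (third person singular of ``to be'') for the dash. The sketch below shows this hypothetical heuristic and why it is only a partial fix; it was not part of the implemented pipeline:

```python
import re

# Match a dash used as a copula between two phrases (hyphen, en dash,
# or em dash surrounded by whitespace).
DASH_COPULA = re.compile(r"\s[–—-]\s")

def insert_copula(sentence, verb="je"):
    """Replace the first copula-like dash with an explicit verb.
    Correct only when the 3rd person singular 'je' fits; a plural
    subject would need 'jsou', so this cannot be applied blindly."""
    return DASH_COPULA.sub(f" {verb} ", sentence, count=1)

print(insert_copula("Lektor – zaměstnanec ČEZ, a. s., který ..."))
# prints: Lektor je zaměstnanec ČEZ, a. s., který ...
```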
If we could eliminate, or at least reduce, the number of sentences in which extractable information is lost, we could obtain a higher number of extracted triples and terminologies and also cover larger parts of the documents. Despite these problems, we were still able to extract a solid amount of information to work with.

For comparison, we extracted text from various newspaper articles, consisting mostly of large pieces of continuous text. For this input, Treex returned 85\% sentences with a predicate and 15\% phrases, and the resulting IE achieved a sentence-to-triple ratio of over 80\%. This implies that grammatically correct documents with continuous text provide a sufficient base for NLP and IE.