\chapter{Results evaluation}
In this chapter we evaluate the IE process over the testing documents, describe the problems we encountered, and propose possible solutions.

\section{Results of the thesis}
We applied the search rules to the documents provided to us; the results are shown in Table~\ref{tab:IEResultCounters}. The columns are as follows: document group, number of sentences, number of sentences with a predicate, number of sentences without a predicate (phrases), word count, triple count, unique terminology count, and overall terminology frequency.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{Information extraction result}
        \label{tab:IEResultCounters}
		\begin{tabular}{| c | c | c | c | c | c | c | c |}
			\hline
			Document Group & Sentences & Sent. w/ Pred. & Phrases & Words & Triples & Unique Term. & Term. Freq. \\
			\hline
			Management Documentation & 8929 & 4737 & 4192 & 109568 & 6497 & 3487 & 5653\\
			\hline
			Substitution Program & 1138 & 392 & 746 & 12527 & 464 & 661 & 821\\
			\hline
			Experience records & 12405 & 5738 & 6667 & 137586 & 6042 & 6225 & 7920\\
			\hline
			Expert's profiles & 1157 & 275 & 882 & 9675 & 125 & 648 & 867\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}

Table~\ref{tab:IEResultRatio} shows the ratios between the number of sentences and the numbers of triples and terminologies. The first column again contains the group name; the remaining columns contain the ratio of triples to all sentences, the ratio of triples to sentences with a predicate, the ratio of unique terminologies to all sentences, and the ratio of overall terminology frequency to all sentences.

\begin{table}  
  \resizebox{0.8\textwidth}{!}{
	\begin{minipage}{\textwidth}
        \caption{Information extraction result ratios}
        \label{tab:IEResultRatio}
		\begin{tabular}{| c | c | c | c | c |}
			\hline
			Document Group & Triples:Sentences & Triples:SentPred & UniqueTerm:Sentences & Term:Sentences\\
			\hline
			Management Documentation & 72.76\% & 137.15\% & 39.05\% & 63.31\%\\
			\hline
			Substitution Program & 40.77\% & 118.37\% & 58.08\% & 72.14\%\\
			\hline
			Experience records & 48.71\% & 105.30\% & 50.18\% & 63.85\%\\
			\hline
			Expert's profiles & 10.80\% & 45.45\% & 56.01\% & 74.94\%\\
			\hline
		\end{tabular}
	\end{minipage}}
\end{table}
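As a sanity check, the ratios in Table~\ref{tab:IEResultRatio} can be recomputed directly from the counts in Table~\ref{tab:IEResultCounters}. A short Python sketch (counts hard-coded from the table; the function name is ours):

```python
# Counts per document group, taken from the extraction-result table:
# (sentences, sentences with predicate, triples, unique terms, term frequency)
counts = {
    "Management Documentation": (8929, 4737, 6497, 3487, 5653),
    "Substitution Program": (1138, 392, 464, 661, 821),
    "Experience records": (12405, 5738, 6042, 6225, 7920),
    "Expert's profiles": (1157, 275, 125, 648, 867),
}

def ratios(sent, pred, triples, uterm, term):
    """Return the four percentages shown in the ratio table."""
    pct = lambda a, b: round(100.0 * a / b, 2)
    return (pct(triples, sent),   # triples per all sentences
            pct(triples, pred),   # triples per sentence with predicate
            pct(uterm, sent),     # unique terminologies per sentence
            pct(term, sent))      # terminology frequency per sentence

for group, row in counts.items():
    print(group, ratios(*row))
```

Running the sketch reproduces every row of the ratio table, e.g. 72.76\%, 137.15\%, 39.05\%, and 63.31\% for the Management Documentation.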

From the statistics we can see that, except for the Management Documentation, there are more sentences without a predicate than with one. We cannot extract triples from sentences without a predicate, and we therefore lose the opportunity to extract information in the form of triples. However, when sentences do have predicates, we are able to extract triples from almost all of them, with the exception of the Expert's profiles PowerPoint presentations. A ratio over 100\% means that from some of these sentences we were able to extract more than one triple. We consider a full sentence to be a sentence containing an originator (subject), a predicate, and a target (object). The more such sentences the NLP produces, the more information we can extract from the documents.

To determine why there are so many sentences without triples, as well as so many sentences without a predicate (phrases), we have to look into the results of the NLP. The PowerPoint presentations contain only short phrases and many abbreviations; full sentences are simply missing in these presentations, and the NLP cannot do anything about that. However, we managed to achieve a decent ratio (almost 75\%) between the number of sentences and the overall terminology frequency. Consequently, terminologies, rather than triples, are the most suitable way to describe the PowerPoint presentations. The Management documentation, the Substitution program, and the Experience records are Word and PDF (printed from Word) documents with large continuous textual parts. Even though their ratio between triples and all sentences is decent, there is still a huge number of sentences without a predicate. This might be a problem of the NLP, or full sentences might simply be missing; looking into both the results of the NLP and the documents themselves, we found that it is a mixture of both.

The NLP tool Treex uses statistical machine learning, and the models used in Treex were trained on newspaper articles. The dependency trees are expected to be parsed from full sentences, or from sentences close to being full. Any missing part of a sentence, or any grammatically incorrect sentence, leads to a wrong dependency tree. Even though humans understand such sentences and can recognize the information they carry, the machine cannot. The only solution is for the author of a document to know in advance that the document will be processed by a machine: the author then needs to write full sentences and use fewer abbreviations to increase the efficiency of the NLP.

Another significant problem with processing the text is that a lot of information is stored in bullet-indented text. The NLP cannot work with information that is stored in another sentence yet has a direct impact on the currently processed sentence. For example:
\begin{quote}
\emph{Podkladem pro přiznání a výpočet mimořádné odměny jsou:\\
- vyhodnocené Prezenční listiny nebo Záznamy o školení (např. v případě školicích dnů, profesních školení, apod.),\\
- vyplněné Třídní knihy (např. v případě specifické základní přípravy),\\
- potvrzené Záznamové listy (v případě obecné části stáže),\\
- protokoly z přezkoušení}
\end{quote}
(In English: \emph{The basis for granting and calculating an extraordinary bonus are: evaluated Attendance sheets or Training records (e.g.\ in the case of training days, professional trainings, etc.), filled-in Class registers (e.g.\ in the case of specific basic training), confirmed Record sheets (in the case of the general part of an internship), examination protocols.})
Treex is unable to process this structure and mark full sentences in it, because the text is fragmented into separate sentences. The above example can be rewritten as the following model:
\begin{itemize}
	\item \textbf{A:}
	\begin{itemize}		
		\item \textbf{B}
		\item \textbf{C}
		\item \textbf{D}
		\item \textbf{E}
	\end{itemize}
\end{itemize}
A possible solution could be to reorganize the example using regular expressions and to create the sentences A B, A C, A D, and A E. However, can we be sure that this model is uniform across all documents, i.e.\ that only A contains the predicate and subject while B, C, D, and E contain only objects? We have tried different strategies; each of them increased the number of full sentences in one part of the text while at the same time decreasing it in another. Without unifying the style in which bullets are used, we cannot create a general strategy for solving this issue.
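To illustrate the regular-expression approach, the following Python sketch joins the head sentence A with each bullet item (the function name and the set of recognized bullet markers are our assumptions; as argued above, this is not a general solution):

```python
import re

def flatten_bullets(text):
    """Rewrite a head sentence A followed by bullet items B, C, ...
    into standalone sentences "A B.", "A C.", ... so that the parser
    sees full sentences instead of fragments."""
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    head = lines[0].rstrip(':')  # "A:" -> "A"
    sentences = []
    for item in lines[1:]:
        item = re.sub(r'^[-*\u2022]\s*', '', item)  # drop the bullet marker
        sentences.append(f"{head} {item.rstrip(',.')}.")
    return sentences

# flatten_bullets("A:\n- B,\n- C") -> ["A B.", "A C."]
```

The sketch works only for the model shown above, where A alone carries the subject and predicate; a list whose items themselves contain predicates would be flattened incorrectly.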
\begin{quote}
\emph{Lektor – zaměstnanec ČEZ, a. s., který na základě písemného pověření (Zadání lektorské činnosti) vystaveného útvarem RLZ realizuje odbornou problematiku formou teoretického školení určenému okruhu zaměstnanců.}
\end{quote}
(In English: \emph{Lecturer – an employee of ČEZ, a. s., who, on the basis of a written authorization (Assignment of lecturing activity) issued by the RLZ department, delivers a specialized topic in the form of theoretical training to a designated group of employees.})
This example shows a sentence with a missing predicate: the predicate is replaced with a dash. Treex treats the dash as a delimiter and splits the example into two separate sentences. We not only lose the sentence structure, but we also lose the subject. To fix this problem we would need to replace the dash with the right predicate, in this example the verb \emph{je} (is). The problem seems unsolvable at the moment, because we would not only have to choose the appropriate verb but also its correct inflection, and inflection is rich in the Czech language. The example also illustrates another fact: it is hard for NLP to process very long clauses consisting of main and subordinate clauses. The chance of producing the right dependency tree decreases with the increasing number of possible dependency trees of the clause; the simpler the clause, the bigger the chance of getting the right dependency tree.
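A naive fix, sketched below in Python, would substitute a fixed verb form for the definitional dash (the function name is ours). As discussed above, this only works when the subject happens to require that form, here the singular \emph{je}, so it is a sketch rather than a solution:

```python
import re

def dash_to_predicate(sentence, verb="je"):
    """Replace the first dash surrounded by spaces with a verb,
    turning "Lektor – zaměstnanec ..." into "Lektor je zaměstnanec ...".
    The verb is fixed; choosing its correct inflection automatically
    remains an open problem."""
    return re.sub(r'\s+[\u2013\u2014-]\s+', f' {verb} ', sentence, count=1)
```

For the example above, \texttt{dash\_to\_predicate} yields "Lektor je zaměstnanec ČEZ, a. s., který \ldots", which Treex would no longer split at the dash.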
If we could eliminate, or at least decrease, the number of sentences in which we lose potentially extractable information, we could achieve a higher number of extracted triples and terminologies, and we could also cover larger parts of the documents. Despite these problems, we were still able to extract a solid amount of information to work with.

As a comparison, we also extracted text from various newspaper articles. That text consisted mostly of large pieces of continuous text. Treex returned 85\% of sentences with a predicate and 15\% phrases, and the resulting IE achieved a sentence-to-triple ratio of over 80\%. This implies that grammatically correct documents with continuous text provide a sufficient base for the NLP and the IE.

\section{Summary}
During the implementation and testing of the IE on the documents, we encountered significant losses during the NLP of the texts. These problems have been described and possible solutions proposed. The performance of the information extraction itself is sufficient; the only bottleneck is the performance of Treex. With the extracted information we can advance to the next chapter.