\chapter{Requirements}
This chapter discusses the requirements on the system as a whole and on each of its tasks, in relation to the goals of this thesis.

\section{Functional requirements}

\subsection{Documents preprocessing}
The software has to be able to extract information from documents of various types (Microsoft Word, Excel, PowerPoint, PDF). This task includes the extraction of plain text from these documents, because the linguistic tool \emph{Treex} can operate only on plain text. The tool processes sentences and creates a dependency tree for each of them. Tables and enumerations cannot be considered sentences, therefore some preprocessing will be needed to convert these structures into a textual form, even though the result will probably not be a full sentence with a predicate. The main challenges of this task are the extraction of the plain text and the preprocessing of the input texts.
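The conversion of tabular structures into sentence-like text could be sketched as follows. This is a minimal illustration only; the class and method names, as well as the "header: value" convention, are assumptions, not part of the actual design:

```java
import java.util.List;

// Illustrative sketch (names are hypothetical): turn one table row into a
// sentence-like string that a linguistic tool can parse as plain text.
public class TablePreprocessor {

    // Joins header/cell pairs into "Header: value." fragments, so that a
    // table row becomes a sequence of short textual statements.
    public static String rowToText(List<String> headers, List<String> cells) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < headers.size() && i < cells.size(); i++) {
            sb.append(headers.get(i)).append(": ").append(cells.get(i)).append(". ");
        }
        return sb.toString().trim();
    }
}
```

For example, a row with headers \texttt{Name}, \texttt{Price} and cells \texttt{Widget}, \texttt{5} would become \texttt{Name: Widget. Price: 5.} Such fragments are not full sentences with a predicate, which is exactly the limitation discussed above.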

\subsection{Linguistic processing}
Having the plain text of the documents, the application will have to invoke the linguistic processing. \emph{Treex} was developed for the Linux platform, so either the application will have to run on Linux as well, or the invocation will have to be externalized to remove this constraint. Possible options are to expose the invocation as a web service or to use Remote Method Invocation (RMI). Linguistic processing is resource hungry, so it should be externalized and run on a fast machine. RMI is therefore the best option: it makes the processing easily accessible and allows it to run on a different machine than the application itself.
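The externalized invocation could be expressed as a remote interface such as the following sketch. The interface name, the method signature, and the trivial local implementation are illustrative assumptions; they do not correspond to any actual Treex API:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface for the externalized linguistic processing.
// A server implementation would be bound in an RMI registry and looked up
// by the application; the method name and signature are assumptions.
public interface LinguisticService extends Remote {
    // Returns the analysis of the given plain text, e.g. as the tool's
    // line-oriented output format.
    String analyze(String plainText) throws RemoteException;
}

// A trivial local implementation used here only to illustrate the contract;
// a real deployment would export it as a remote object.
class EchoLinguisticService implements LinguisticService {
    public String analyze(String plainText) {
        return "ANALYZED:" + plainText;
    }
}
```

The application would then obtain a stub via \texttt{Naming.lookup} and call \texttt{analyze} without knowing on which machine the processing actually runs.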

\subsection{Process linguistic results}
The result of the linguistic processing is printed in a specific format to the standard output or to a file. The output has a predefined structure and can be easily processed by machines. The dependency analysis of a sentence can be abstracted as a tree; in the output format, each word of a sentence is on a separate line. Searching directly over this output would not be adequate. Loading and converting the output into an internal structure that can be accessed as a tree will make the search much easier to express. The development of patterns based on matching subtrees of the dependency tree will also provide the user with a much better experience.
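Loading the line-oriented output into a tree could look like the following sketch. The exact column layout (word id, word form, head id, tab-separated) is an assumption for illustration; the real output format of the tool may differ:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an internal tree structure built from a one-word-per-line
// dependency output. The assumed line format is "id<TAB>form<TAB>head",
// where head 0 denotes the artificial sentence root.
public class DepTree {
    public final String form;
    public final List<DepTree> children = new ArrayList<>();

    DepTree(String form) { this.form = form; }

    public static DepTree parse(List<String> lines) {
        DepTree root = new DepTree("ROOT");
        List<DepTree> nodes = new ArrayList<>();
        int[] heads = new int[lines.size()];
        // First pass: create one node per line and remember its head id.
        for (int i = 0; i < lines.size(); i++) {
            String[] cols = lines.get(i).split("\t");
            nodes.add(new DepTree(cols[1]));
            heads[i] = Integer.parseInt(cols[2]);
        }
        // Second pass: attach each node to its parent (or to the root).
        for (int i = 0; i < nodes.size(); i++) {
            DepTree parent = heads[i] == 0 ? root : nodes.get(heads[i] - 1);
            parent.children.add(nodes.get(i));
        }
        return root;
    }
}
```

For the two lines \texttt{1 John 2} and \texttt{2 sleeps 0}, this produces a root whose only child is \emph{sleeps}, which in turn has the child \emph{John}.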

\subsection{Search rules}
The application has to provide a set of search patterns for IE. No pattern is both universal and precise, therefore the user will need an option to create new patterns or to alter existing ones; alternatively, the application could use machine learning to improve their strength by itself. It is more appropriate to give the user control over the creation of patterns and the possibility to test various extraction approaches. The user will be able to import search patterns into the application and export them from it. Apart from the search patterns, a search algorithm will have to be created. As input it will take a list of patterns and a processed document, and it will return the extracted data. The search will be based on matching a pattern, represented as a subtree, against the dependency tree of a sentence. This will make the problem easier for the user to understand, and the patterns easier to modify.
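The core of such an algorithm, matching a pattern subtree against a dependency tree, could be sketched as follows. This is a simplified illustration, assuming that nodes are compared by a single label and that a pattern child may match any distinct child of the tree node; the real patterns will likely be richer:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of pattern matching as an unordered subtree match over a
// dependency tree. Matching on a single label is a simplification.
public class PatternMatcher {

    public static class Node {
        final String label;
        final List<Node> children = new ArrayList<>();
        public Node(String label, Node... kids) {
            this.label = label;
            for (Node k : kids) children.add(k);
        }
    }

    // A pattern node matches a tree node if their labels agree and every
    // pattern child matches some distinct child of the tree node.
    public static boolean matches(Node pattern, Node tree) {
        if (!pattern.label.equals(tree.label)) return false;
        List<Node> available = new ArrayList<>(tree.children);
        for (Node pc : pattern.children) {
            Node found = null;
            for (Node tc : available) {
                if (matches(pc, tc)) { found = tc; break; }
            }
            if (found == null) return false;
            available.remove(found);
        }
        return true;
    }

    // Tries to match the pattern at every node of the tree.
    public static boolean matchesAnywhere(Node pattern, Node tree) {
        if (matches(pattern, tree)) return true;
        for (Node c : tree.children)
            if (matchesAnywhere(pattern, c)) return true;
        return false;
    }
}
```

A pattern \emph{sleeps(John)} would then match the dependency tree of the sentence ``John sleeps soundly'', while \emph{sleeps(Mary)} would not.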

\subsection{Create knowledge representation and store it}
Having the extracted information, an appropriate representation of the extracted data has to be defined. The goal of this thesis is to represent the data based on the principles of Linked Data. Linked Data will be described later; according to its specification, all resources (data) are encoded in the RDF (Resource Description Framework) format. The same format will be used in this thesis; the reasons will be given in the chapter on knowledge representation. Apart from that, a data store has to be selected that will allow storing the extracted knowledge together with its semantic meaning.
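As a minimal illustration of what such a representation looks like, an extracted fact can be serialized as an RDF triple in the N-Triples syntax. The subject URI below is purely illustrative and no particular vocabulary is prescribed here:

```java
// Minimal sketch: serialize one extracted fact (subject, predicate,
// literal object) as an RDF triple in N-Triples syntax. The URIs used
// in the example are illustrative only.
public class TripleWriter {
    public static String triple(String subjectUri, String predicateUri, String objectLiteral) {
        return "<" + subjectUri + "> <" + predicateUri + "> \"" + objectLiteral + "\" .";
    }
}
```

For instance, serializing the title of a document could yield \texttt{<http://example.org/doc1> <http://purl.org/dc/terms/title> "Annual Report" .}, which any RDF store can load directly.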

\subsection{Create search engine}
The application will have to provide tools that allow the user to search for and retrieve the information he desires. The application will need to interact with the user; for that reason a GUI (graphical user interface) is much more suitable than the command line.

\section{Other requirements}
Even though the application is experimental, future usability and extensibility have to be considered as well. The application will operate with a database and a linguistic tool, and will provide a search engine over the extracted information and the documents. It should serve many users. The available options are a stand-alone application or a web application. Modern applications are built as web services due to the ability to implement and deploy new features faster than by distributing updates; this also helps to centralize the logic and the document and knowledge management. For that reason the application will run on an application server, with the possibility of the data store and the linguistic processing running on different machines.

\section{Experimental evaluation}
The goal is to gather statistics about the information extracted from the documents. The extraction is not just about numbers; the actual information gain for a user has to be taken into account as well. The purpose of the information extraction is not to extract every possible piece of knowledge, but to extract and mark the key elements and relations stored in a document. This cannot be measured by any metric, since it is a matter of user subjectivity. An observation will have to be made to compare the extracted knowledge with the actual knowledge stored in the document.