\chapter{Describing and storing the information}

In the previous chapters we discussed what can be extracted from documents. What we need now is to find out how to store the entities, relations and terminology. This is not as simple as it may seem, because we need to create a model, a description of what the extracted information means. In the first section we will try to find a proper language in which to describe this information. With that language and the knowledge of the extracted information we will create a descriptive model. We will finish this chapter with an analysis and a proposal of a proper data store.

\section{How to describe information}

Let us assume we have extracted entities, relations and terminology, with their frequencies, from a document. An entity is a real-life object: it can be a person, a thing, an animal or a phenomenon. An entity is in a relationship with another entity if a predicate exists in the sentence on which the entity depends. Now imagine that we have hundreds of these entities with relations. We need to represent these entities and connections in a machine-readable form to allow a machine to manipulate them, that is, to store them and search over them.

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{./img/graphdb.png}
\caption{Data storage models. Taken from \cite{graphdb}}
\label{fig:gprah}
\end{figure}

Before searching for the proper language to encode the data, we need to look at how the data are organized. Three options are worth considering (Figure~\ref{fig:gprah}). The concept of a traditional relational database is that we have some primary key on which the other fields depend. Is this our case? Although we have relations between entities, we cannot determine which entity should be a primary key and which entities should depend on it, because an entity can be both the originator and the target of a relation. This is not the way we should look at the data. Do the entities have some hierarchical structure? Again, the answer is no. Take an XML document: we have a root node and a tree structure with other dependent nodes. We cannot say which entity should be the root entity on top of the hierarchy with the rest somehow dependent on it. A graph representation consists of nodes related to other nodes, with no node having more importance than the others. If we represent our entities as nodes, an edge between a pair of nodes determines that they are in some relation, and if we label these edges to specify what kind of relation each represents, then we have found a proper way to describe and store the information extracted from the document. This graph is nothing other than the already mentioned knowledge graph. What we need to do now is to find out how to describe this graph in a machine-readable form.

\subsection{RDF definition}

If the graph data model is the model designed for information extracted from documents, RDF is the language in which it is encoded. The Resource Description Framework (RDF) is a family of specifications designed by the World Wide Web Consortium (W3C). It is used as a general method for modeling various information as resources on the Internet, using a variety of syntax notations and data serialization formats. RDF is designed to be easily read and interchanged by computer applications. It uses URIs (Uniform Resource Identifiers) to identify resources. A resource in our case is an entity, relation or terminology expression extracted from the document. A URI consists of a prefix and the resource name. For example, \url{"http://datlowe.org/semjobKB/data/document/term\#director"} has the prefix \url{"http://datlowe.org/semjobKB/data/document/term\#"} and the resource name \emph{director}. In the same manner we will encode other resources. A combination of a Resource, a Property and a Property value forms a \emph{statement}, known as a subject, predicate and object \emph{triple}. The subject denotes the resource; the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object. The subject of an RDF statement is either a URI or a blank node, both of which denote resources. Resources indicated by blank nodes are anonymous resources: they are not directly identifiable from the RDF statement. The predicate is a URI which also indicates a resource, representing a relationship. The object is a URI, a blank node or a Unicode string literal.

\begin{itemize}
	\item \texttt{Resource} is anything that can have a URI, such as \url{"http://datlowe.org/semjobKB/data/document/term\#director"}
	\item \texttt{Property} is a Resource that has a name, such as \url{"http://datlowe.org/semjobKB/data/document/predicate\#hasSkill"} or \url{"http://datlowe.org/semjobKB/data/document/predicate\#coordinates"}
	\item \texttt{Property value} is the value of a Property (a text or a number), called a \emph{Literal}, or it can be another Resource. Examples: \url{"http://datlowe.org/semjobKB/data/document/term\#english"} or \emph{"Mark O'Reily"}
\end{itemize}
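To make the structure above concrete, a statement can be sketched as a plain (subject, predicate, object) tuple. The prefixes reuse those shown above; the tuple layout itself, and the \emph{hasName} property, are only our illustration:

```python
# A minimal sketch of RDF statements as (subject, predicate, object) tuples.
# The URIs reuse the prefixes shown above; the tuple layout is illustrative.

TERM = "http://datlowe.org/semjobKB/data/document/term#"
PRED = "http://datlowe.org/semjobKB/data/document/predicate#"

# subject and predicate are resources (URIs); the object is another resource here
triple = (TERM + "director", PRED + "hasSkill", TERM + "english")

# the object may also be a plain string literal; hasName is a hypothetical property
literal_triple = (TERM + "director", PRED + "hasName", "Mark O'Reily")
```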

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{./img/tripleExample.png}
\caption{RDF triple example}
\label{fig:triple}
\end{figure}

A collection of RDF statements represents a labeled, directed graph. An RDF graph can be stored in any relational database, but it is more commonly stored in a native representation called a \emph{Triplestore}. RDF has several serialization formats; the most common are \emph{RDF/XML}, \emph{N3} and \emph{Turtle}. We will use \emph{Turtle}, which is a compact, human-friendly format. We will describe the database in a later section. We have described what RDF, triples, resources, URIs and Literals are. We now have to answer the question of how we will encode our extracted information.\cite{W3C}

\subsection{RDF Schema definition}
RDF Schema is a set of classes with certain properties using the RDF extensible knowledge representation language, providing basic elements for the description of \emph{ontologies}, otherwise called RDF vocabularies. It is used to structure RDF resources, to create a model and to express relations between resources in an RDF document. Based on an RDF Schema we can have an overview of the knowledge that we are encoding using RDF. As mentioned in the definition, an ontology is a set of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts. For more details please see \url{http://www.w3.org/TR/rdf-schema/}. The ontology based on the information described in the previous subsection will be described in detail in the implementation part of the thesis.
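As a hedged illustration only (the class and property names below are our own placeholders, not the final ontology, which is described in the implementation part), an RDF Schema fragment written in Turtle could look like this:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://datlowe.org/semjobKB/ontology/document#> .

# two illustrative classes for the resources extracted from documents
ex:Term      a rdfs:Class .
ex:Predicate a rdfs:Class .

# an illustrative property whose subject and object are both terms
ex:hasSkill rdfs:domain ex:Term ;
            rdfs:range  ex:Term .
```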

\subsection{How to code extracted information}

We have a document from which we have extracted information: a set of triples (subject, predicate, object) and a set of terminology multi-word phrases with the frequencies in which they occur in the document. Each of the matched words has a textual representation and a lemma. Due to linguistic inflection, the same object can have multiple textual representations, but all these words share a common lemma. To uniquely identify an entity (resource) we therefore use the lemma as the representation of the resource; the URI will consist of a prefix followed by the lemma of the resource. For a multi-word resource we replace the white space between the words with the underscore character, e.g. \url{"http://datlowe.org/semjobKB/data/document/terminology\#sale\_department"}. URIs are case sensitive, so we convert every lemma to lower case. Every extracted part of a triple has to be transformed from its textual representation into the URI form using its lemma. The frequency of a terminology is a positive number; we do not need to uniquely identify this number, therefore it can be encoded as a Literal.
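As a minimal sketch of this convention (the function name is our own; the prefix matches the example above), the lemma-to-URI transformation can be written as:

```python
# Sketch of mapping a (possibly multi-word) lemma to a resource URI, following
# the rules above: lower-case the lemma and replace white space between the
# words with underscores. The function name itself is our own illustration.

TERMINOLOGY = "http://datlowe.org/semjobKB/data/document/terminology#"

def lemma_to_uri(lemma, prefix=TERMINOLOGY):
    """Build a lower-case, whitespace-free URI from a lemma."""
    return prefix + "_".join(lemma.lower().split())

# lemma_to_uri("Sale Department") yields
# "http://datlowe.org/semjobKB/data/document/terminology#sale_department"
```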

\section{Modeling our information}
\label{sec:Ontology}
The extracted entities can be grouped into three categories. The first contains terms serving as subjects and objects, because these entities can appear in both positions in a triple and they are nouns. The second group consists of predicates, because a predicate has to be a verb. Terminology has at least one noun as its root, with dependent adjectives that develop the noun. Some terminology can appear as a subject or an object; however, it carries different information for us and should therefore be separated into its own group. These groups are common across all documents, and each member of a group can appear in multiple documents. As discussed, one graph consists of numerous triples, and each document has to have a unique graph containing the extracted information. In the previous section we described how to encode individual entities, but we still need to find out how to encode the whole document and, later, the whole extracted domain.

\subsection{Representing a subject, predicate, object}
Each word has numerous textual representations and one lemma. Using its lemma we will create a unique URI for a specific resource. But we cannot invoke linguistic processing every time we need to get the lemma for a word and the URI from that lemma; we have to encode it all at once. We will create one graph that contains all textual representations of the subjects and objects. We will form a new triple where the subject position holds the subject or object URI, the property is \emph{prefix\#hasTextualRepresentation}, and the property value is a literal containing the textual representation of the entity (subject or object) extracted from the document. The same holds for the lemma, which will store the precise textual representation of the lemma. In this way we can store all subjects and objects in one place. We have not yet mentioned that each document will have its own unique graph. This approach greatly simplifies the complexity of storing and searching over the extracted data. We could store everything in one place, but logically one document is one entity, one resource containing a smaller piece of information (triples, terminology) about the document. However, we need to store all textual representations of the entities found across all documents. Therefore we need one graph for each of the groups mentioned earlier. To recapitulate: there has to be one graph containing the lemmas and textual representations of the subjects and objects, and another one for predicates. When we need to get the resource URI for a given word, we look into this graph and search for a resource URI that has, in the object position, text matching the given word. Thanks to this, the document graph can store only URIs, making search easier. How to search over an RDF graph will be explained in the next chapter.
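The shared representation graph described above can be sketched with plain tuples. This is an illustration only, not the final implementation; the \emph{hasLemma} property name and the lookup helper are hypothetical:

```python
# Illustrative sketch (plain tuples, not the final implementation) of the shared
# representation graph: each subject/object URI is linked to its surface forms
# via hasTextualRepresentation and to its lemma via a hypothetical hasLemma
# property; a word is resolved back to its URI by matching the object literals.

PREFIX = "http://datlowe.org/semjobKB/data/document/"
HAS_TEXT = PREFIX + "predicate#hasTextualRepresentation"
HAS_LEMMA = PREFIX + "predicate#hasLemma"  # hypothetical property name

representation_graph = [
    (PREFIX + "term#director", HAS_TEXT, "director"),
    (PREFIX + "term#director", HAS_TEXT, "directors"),
    (PREFIX + "term#director", HAS_LEMMA, "director"),
]

def uri_for_word(word, graph):
    """Return the resource URI whose stored textual representation matches."""
    for subject, predicate, obj in graph:
        if predicate == HAS_TEXT and obj == word.lower():
            return subject
    return None
```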

\subsection{Terminology representation}
An extracted terminology is a multi-word phrase with the frequency in which it appeared in the document. Its textual representation and lemma should be treated in the same manner as those of subjects, predicates and objects, and thus stored all in one place. The frequency is important information and has to remain in the document's graph. But we can also use this information to store the overall frequency of a terminology over all documents, together with the number of documents in which it appeared. From this knowledge we can decide whether a terminology is unique to some document or whether it is common terminology of little importance. Each document graph will have the terminology URI in the subject position, then the predicate \emph{prefix\#hasFrequency} and a literal with the frequency value. And in one place we will store all textual representations of terminologies together with their lemmas, overall frequencies and the numbers of documents in which they are present. This approach separates the common and the document-specific information and reduces both the complexity of the knowledge representation and the information redundancy in the graphs.
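The aggregation described above can be sketched as follows; the plain dictionaries stand in for the per-document and shared graphs and are illustrative only:

```python
# Sketch of aggregating terminology frequencies across documents: each document
# graph keeps its own hasFrequency value, while a shared graph accumulates the
# overall frequency and the document count per lemma. Plain dicts stand in for
# the RDF graphs here, for illustration only.

from collections import defaultdict

def aggregate(per_document_frequencies):
    """per_document_frequencies: list of {lemma: frequency} dicts, one per document."""
    overall = defaultdict(int)    # total occurrences over all documents
    doc_count = defaultdict(int)  # number of documents the lemma appears in
    for doc in per_document_frequencies:
        for lemma, freq in doc.items():
            overall[lemma] += freq
            doc_count[lemma] += 1
    return dict(overall), dict(doc_count)
```

A lemma whose occurrences are spread over many documents is common terminology of little importance; one concentrated in few documents is unique to them.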

\subsection{Document representation}
Each extracted triple will have its textual representation stored in one shared graph, and therefore we need to store only the entities' URIs in the document graph. One example can be:
\url{"http://datlowe.org/semjobKB/data/document/term\#director"} \url{"http://datlowe.org/semjobKB/data/document/predicate\#speak"} \url{"http://datlowe.org/semjobKB/data/document/term\#english"}.
This information expresses that a director has the ability to speak English. When writing RDF using the Turtle syntax, each triple has to be on a new line and has to end with a dot. The set of triples can then be stored in a database. In the same way we will add terminology to the document's graph by creating triples and adding them. Such a triple will contain the terminology URI, the predicate \emph{hasFrequency} and a literal containing the frequency number. Example:
\url{"http://datlowe.org/semjobKB/data/document/terminology\#sale\_department"} \url{"http://datlowe.org/semjobKB/ontology/document\#hasFrequency"} \textquotedblleft5\textquotedblright.
This means that the document contains 5 phrases with the lemma \emph{sale department}. As you might have noticed, we do not store which exact phrase is in the document. We could add the textual representation to the document graph, but we do not need to know which instance of the resource is in the document. We know that the resource, the entity, is in the document, and that is what we focus on: the knowledge, not the instance of that knowledge. To retrieve the document from which a graph was created, we need to add a triple containing the path to the document, and also the path to a plain-text version of the document, so that the text can be displayed regardless of the document's origin. Some statistical information, such as the number of triples, unique triples, predicates, terms and sentences, would also be useful. The triples and the terminology, extracted and encoded into RDF using the Turtle syntax, can then be stored in the document graph, and no information is lost in this phase. We must not forget about blank nodes. Sometimes we can extract only a part of a triple, for example the object is missing. We do not want to lose even this partial information, therefore we will use a special resource URI, \emph{prefix\#blank}, to make it clear that the triple is incomplete. Everything that was extracted will be stored in the database, and the overall knowledge, such as total frequencies and textual representations across the documents, will be extended.
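The serialization step, including the substitution of the special blank resource for a missing object, can be sketched as follows; the exact blank-resource URI and the helper are illustrative, and literal objects are left aside for brevity:

```python
# Sketch of serializing one document's triples into Turtle-style lines: each
# statement on its own line, terminated by a dot, with the hypothetical
# prefix#blank resource substituted when the object of a triple is missing.
# Literal objects are omitted here for brevity.

PREFIX = "http://datlowe.org/semjobKB/data/document/"
BLANK = PREFIX + "term#blank"  # illustrative URI for the special blank resource

def to_turtle(triples):
    """Each Turtle statement goes on its own line and ends with a dot."""
    lines = []
    for s, p, o in triples:
        o = o if o is not None else BLANK
        lines.append("<%s> <%s> <%s> ." % (s, p, o))
    return "\n".join(lines)
```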

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{./img/modelKnowledge.png}
\caption{Model overview}
\label{fig:model}
\end{figure}

\section{Searching for a proper database}
The graphs created from the knowledge acquired from the documents have to be persisted in a database so that we can search over them. The triples can be stored in a relational database; however, such a database is not designed for this kind of data. A \textbf{triplestore} is a purpose-built database for the storage and retrieval of triples. Like a relational database, it stores data and retrieves it via a query language, but a triplestore is optimized specifically for triples. Moreover, triples can be imported and exported using RDF. There are several implementations of triplestores:
\begin{itemize}
	\item \textbf{FUSEKI} is an HTTP server that allows the SPARQL query language to PUT/POST/DELETE RDF triples. It runs only in memory, thus making it unsuitable for permanent persistence of graphs.
	\item \textbf{Oracle} is an open, standards-based, scalable, secure, reliable and performant RDF management platform. Based on a graph data model, RDF data (triples) are persisted, indexed and queried, like other object-relational data types.\cite{Oracle}
	\item \textbf{OpenRDF Sesame} is a de-facto standard framework for processing RDF data. This includes parsers, storage solutions, reasoning and querying, using the SPARQL query language. It offers a flexible and easy to use Java API that can be connected to all leading RDF storage solutions.\cite{OpenRDF}
	\item \textbf{OpenLink Virtuoso} is a middleware and database engine hybrid that combines the functionality of a traditional RDBMS, ORDBMS, virtual database, RDF, XML, free-text, web application server and file server functionality in a single system.\cite{Virtuoso}
\end{itemize}
Our requirements are pretty simple: we need to permanently persist graphs in the database, update them when using different search strategies, and search over them. The database has to be able to store hundreds of graphs. Performance is another important aspect. Hundreds of graphs will consume a large space, and therefore an in-memory database is not an option; moreover, treex consumes a large amount of memory when running, so we need to leave space for it. The database also has to provide indexing for faster query evaluation. Oracle and Virtuoso are suitable for this purpose. Based on previous experience with Virtuoso and the Jena Java API, Virtuoso is a perfect candidate. The Virtuoso solution contains \emph{Virtuoso Conductor}, a web-based database administration user interface through which we can access the graphs and use the SPARQL query language (which will be explained in the next chapter). The \emph{Virtuoso Jena API} is an API for accessing the database directly from a Java project, thus making it simple to interact with the database; more details are given in the implementation section.

\section{Summary}
We have elaborated the possible representations of the extracted information. We found out that RDF is designed for describing and modeling information and is suitable for our task. We have discussed how to model our extracted information and how we will create and store the graphs. At the end of the chapter we searched for and selected an appropriate database engine. The only remaining research concerns the design of a search engine over the graphs.