\chapter{Describing and storing the information}

Previous chapters have elaborated on what can be extracted from documents. This chapter analyzes the principles of Linked Data and how to use them to create a knowledge representation and store it. The knowledge has to represent the extracted entities, relations and terminologies. The first section discusses the Linked Data principles. Based on these principles, an overview of RDF follows. The rest of this chapter describes how to represent the knowledge extracted from the provided documents.


\section{Linked Data principles}
The term Linked Data refers to a set of best practices for publishing and interlinking structured data on the Web. These best practices were introduced by Tim Berners-Lee in his Web architecture note Linked Data \cite{Linked} and have become known as the Linked Data principles. These principles are the following:

\begin{itemize}
	\item Use URIs to denote things
	\item Use HTTP URIs so that these things can be referred to and looked up ("dereferenced") by people and user agents.
	\item Provide useful information about the thing when its URI is dereferenced, leveraging standards such as RDF, SPARQL.
	\item Include links to other related things (using their URIs) when publishing data on the Web.
\end{itemize}

Tim Berners-Lee gave a presentation on Linked Data at the TED 2009 conference \cite{TED}. In it, he restated the Linked Data principles as three simple rules:

\begin{itemize}
	\item All kinds of conceptual things, they have names now that start with HTTP.
	\item If I take one of these HTTP names and I look it up [..] I will get back some data in a standard format which is kind of useful data that somebody might like to know about that thing, about that event.
	\item When I get back that information it's not just got somebody's height and weight and when they were born, it's got relationships. And when it has relationships, whenever it expresses a relationship then the other thing that it's related to is given one of those names that starts with HTTP.
\end{itemize}

These simple rules denote that the extracted resources do not have to be used only to search for a document; they also stand for unique things. These things can be linked to external data sets, which can contain additional information and relations with other things, creating an infinite web of knowledge.

\subsection{RDF definition}

If the graph data model is the model designed for information extracted from documents, RDF is the language in which it is encoded. The Resource Description Framework (RDF) is a family of specifications designed by the World Wide Web Consortium (W3C). It is used as a general method for modeling various information as resources on the Internet, using a variety of syntax notations and data serialization formats. RDF is designed to be easily read and interchanged by computer applications. It uses URIs (Uniform Resource Identifiers) to identify resources. A resource in our case is an entity, relation or terminology expression extracted from a document. A URI consists of a prefix and a resource name. For example, \url{"http://datlowe.org/semjobKB/data/document/term\#director"} has the prefix \url{"http://datlowe.org/semjobKB/data/document/term\#"} and the resource name \emph{director}. Other resources will be encoded in the same manner. A combination of a Resource, a Property and a Property value forms a \emph{statement}, known as a subject, predicate and object \emph{triple}. The subject denotes the resource; the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object. The subject of an RDF statement is either a URI or a blank node, both of which denote resources. Resources indicated by blank nodes are anonymous resources; they are not directly identifiable from the RDF statement. The predicate is a URI which also indicates a resource, representing a relationship. The object is a URI, a blank node or a Unicode string literal.

\begin{itemize}
	\item \texttt{Resource} is anything that can have a URI such as \url{"http://datlowe.org/semjobKB/data/document/term\#director"}
	\item \texttt{Property} is a Resource that has a name, such as \url{"http://datlowe.org/semjobKB/data/document/predicate\#hasSkill"} or \url{"http://datlowe.org/semjobKB/data/document/predicate\#coordinates"}
	\item \texttt{Property value} is a value of a Property (text, number) called \emph{Literal} or it can be another Resource. Example \url{"http://datlowe.org/semjobKB/data/document/term\#english"} or \url{"Mark O'Reily"}
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{./img/tripleExample.png}
\caption{RDF triple example}
\label{fig:triple}
\end{figure}

A collection of RDF statements represents a labeled, directed graph. An RDF graph can be stored in any relational database, but it is usually stored in a native representation called a \emph{triplestore}. RDF has several serialization formats, the most common being \emph{RDF/XML, N3 and Turtle}. We will use \emph{Turtle}, which is a compact, human-friendly format \cite{W3C}.
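As a minimal sketch (not the thesis implementation), the following Python snippet shows how one extracted statement could be rendered in the Turtle serialization. The prefix URIs follow the examples used in this chapter; the prefix labels \texttt{term:} and \texttt{pred:} are chosen only for illustration.

```python
# Prefixes follow the URI examples used in this chapter.
TERM = "http://datlowe.org/semjobKB/data/document/term#"
PRED = "http://datlowe.org/semjobKB/data/document/predicate#"

def to_turtle(subject, predicate, obj):
    """Render one (subject, predicate, object) triple as a small Turtle document."""
    return (
        "@prefix term: <" + TERM + "> .\n"
        "@prefix pred: <" + PRED + "> .\n\n"
        "term:" + subject + " pred:" + predicate + " term:" + obj + " .\n"
    )

print(to_turtle("director", "speak", "english"))
```

The resulting document declares the two prefixes and then states the triple on a single line terminated by a full stop, which is the Turtle statement syntax.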

\subsection{RDF Schema definition}
RDF Schema is a set of classes with certain properties using the RDF extensible knowledge representation language, providing basic elements for the description of \emph{ontologies}, otherwise called RDF vocabularies. It is used to structure RDF resources, to create a model and to express relations between resources in an RDF document. An ontology is a set of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts. For more details please see \url{http://www.w3.org/TR/rdf-schema/}. The ontology based on the information described in the previous subsection will be described in detail in the implementation part of the thesis.

\subsection{How to code extracted information}

The information extracted from each document contains a set of triples (subject, predicate, object) and a set of terminologies. Terminologies are multi-word phrases together with the frequencies in which they occur in the document. Each matched word has a textual representation and a lemma. Due to linguistic inflection, the same object can have multiple textual representations, but all these words share a common lemma. To uniquely identify an entity (resource), only the lemma is used as its unique representation. A URI therefore consists of a prefix followed by the lemma of the resource. For a multi-word resource, the white space between the words has to be replaced with the underscore character, because URIs do not allow white space. Example: \url{"http://datlowe.org/semjobKB/data/document/terminology\#sale\_department"}. URIs are case-sensitive, so every lemma is converted to lower case. Every extracted part of a triple has to be transformed from its textual representation into the URI form by using its lemma. The frequency of a terminology is a positive number and can be encoded as a literal.
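The lemma-to-URI rules above can be sketched as a small helper function. The function name is hypothetical; the prefix is the terminology prefix from the example.

```python
# Hypothetical helper building a resource URI from a lemma, following the
# rules above: lower-case the lemma and replace white space with underscores,
# because URIs are case-sensitive and allow no white space.
TERMINOLOGY_PREFIX = "http://datlowe.org/semjobKB/data/document/terminology#"

def lemma_to_uri(lemma, prefix=TERMINOLOGY_PREFIX):
    return prefix + lemma.strip().lower().replace(" ", "_")

lemma_to_uri("Sale Department")
# -> "http://datlowe.org/semjobKB/data/document/terminology#sale_department"
```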

\section{Modeling knowledge}
\label{sec:Ontology}
The extracted entities can be grouped into three categories.
\begin{itemize}
	\item Terms as subjects and objects - because these entities can appear in both positions in a triple
	\item Predicates - because they denote relations and cannot be terms or terminologies
	\item Terminologies - some terminologies can appear as a subject or object; however, they carry different information and should therefore be separated into their own group
\end{itemize}

These groups are common across all documents, and each member of a group can appear in multiple documents.

\subsection{URI Prefix}
As mentioned, a URI consists of a prefix and a resource name. The prefixes are used to categorize resources based on the model. All resources extracted from documents are information resources. The common prefix for all resources used in this thesis is \url{"http://datlowe.org/semjobKB/data/document/"}. By parsing the URI prefix, the following information can be obtained:
\begin{itemize}
	\item Domain - \emph{datlowe.org}
	\item Project - \emph{semjobKB}
	\item Type - \emph{data}, denotes that the resource contains information
	\item Category I - \emph{document}, denotes that the data is related with documents
	\item Category II - resources are categorized in \emph{term, predicate, terminology}
\end{itemize}

This structure allows us to uniquely distinguish each resource and determine where it belongs.
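The decomposition above can be sketched with the standard library alone; the function name and the returned dictionary keys are chosen here only for illustration.

```python
from urllib.parse import urlparse

def parse_resource_uri(uri):
    """Split a resource URI into the components listed above (a sketch)."""
    parsed = urlparse(uri)
    project, rtype, category1, category2 = parsed.path.strip("/").split("/")
    return {
        "domain": parsed.netloc,    # e.g. datlowe.org
        "project": project,         # e.g. semjobKB
        "type": rtype,              # e.g. data
        "category1": category1,     # e.g. document
        "category2": category2,     # term / predicate / terminology
        "name": parsed.fragment,    # resource name, e.g. director
    }

info = parse_resource_uri("http://datlowe.org/semjobKB/data/document/term#director")
```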

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{./img/modelKnowledge.png}
\caption{Model overview}
\label{fig:model}
\end{figure}


\subsection{Representing a subject, predicate, object}
Each word can have numerous textual representations but only one lemma. Using the lemma, a unique URI for a specific resource is created. Linguistic processing cannot be invoked each time a lemma for a word is needed in order to derive the resource URI; the mapping has to be encoded once, in one graph that contains all textual representations and lemmas of subjects and objects. We form a new triple in which the subject position holds the subject or object URI, the property is \emph{prefix\#hasTextualRepresentation}, and the property value is a literal containing the textual representation of the entity (subject, object) extracted from the document. The same is done for the lemma, which stores the precise textual representation of the lemma. This allows all common information about terms to be stored in one place. The possible textual representations and lemmas are common across the whole domain, which justifies storing this knowledge in one graph rather than spreading it over different graphs. Predicates are stored in the same way.
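A sketch of the common-graph entries for one term follows. The property names \emph{hasTextualRepresentation} and \emph{hasLemma} follow the text above; the ontology prefix used for them here is an assumption for illustration.

```python
# Assumed ontology prefix for the properties; the term prefix follows the
# URI examples used in this chapter.
ONT = "http://datlowe.org/semjobKB/ontology/document#"
TERM = "http://datlowe.org/semjobKB/data/document/term#"

def representation_triples(lemma, surface_forms):
    """Triples linking a term URI to its lemma and all of its textual forms."""
    uri = TERM + lemma.strip().lower().replace(" ", "_")
    triples = [(uri, ONT + "hasLemma", lemma)]
    for form in surface_forms:
        triples.append((uri, ONT + "hasTextualRepresentation", form))
    return triples

triples = representation_triples("director", ["Director", "directors"])
```

One call produces one \emph{hasLemma} triple plus one \emph{hasTextualRepresentation} triple per surface form, all sharing the same subject URI.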

\subsection{Terminology representation}
An extracted terminology is a multi-word phrase with the frequency in which it appeared in the document. Its textual representations and lemma are treated in the same manner as those of subjects, predicates and objects: stored in one common graph. The frequency of a terminology is unique for each document graph and therefore cannot be stored in the common graph. However, the overall frequency of a terminology across all documents, together with the number of documents in which it appeared, is useful information shared by all documents. With this knowledge it can be decided whether a terminology is unique to some document or whether it is a common terminology of small importance. Each document graph will contain the terminology URI in the subject position, the predicate \emph{prefix\#hasFrequency} and a literal with the frequency value. In one common place we store all textual representations of terminologies together with their lemma, overall frequency and the number of documents in which they appear. This approach separates the common and the document-specific information, and it simplifies the knowledge representation and reduces information redundancy in the graphs.
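The split between per-document and domain-wide terminology statistics can be sketched as follows. The \emph{hasFrequency} property appears in the text; \emph{hasOverallFrequency} and \emph{hasDocumentCount} are hypothetical names for the aggregate properties described above.

```python
ONT = "http://datlowe.org/semjobKB/ontology/document#"
TERMINOLOGY = "http://datlowe.org/semjobKB/data/document/terminology#"

def frequency_triple(lemma, frequency):
    """Stored in the individual document graph."""
    uri = TERMINOLOGY + lemma.strip().lower().replace(" ", "_")
    return (uri, ONT + "hasFrequency", str(frequency))

def aggregate_triples(lemma, per_document_frequencies):
    """Stored once in the common graph: totals over all documents."""
    uri = TERMINOLOGY + lemma.strip().lower().replace(" ", "_")
    return [
        (uri, ONT + "hasOverallFrequency", str(sum(per_document_frequencies))),
        (uri, ONT + "hasDocumentCount", str(len(per_document_frequencies))),
    ]

triple = frequency_triple("sale department", 5)
```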

\subsection{Document representation}
Each document contains its own unique set of triples. For that reason each document will have its own graph. Each extracted triple has its textual representations stored in the common graph, so the document graphs contain only the resource URIs. A document graph then contains only a set of triples. The triple extracted from a document sentence \emph{Director speaks English} would appear in the graph as:
\url{"http://datlowe.org/semjobKB/data/document/term\#director"} \url{"http://datlowe.org/semjobKB/data/document/predicate\#speak"} \url{"http://datlowe.org/semjobKB/data/document/term\#english"}.
Each document also has its own unique set of terminologies, and each terminology has its frequency. Both the terminology and its frequency are unique for each document and have to be stored in the same graph as the triples. For example:
\url{"http://datlowe.org/semjobKB/data/document/terminology\#sale\_department"} \url{"http://datlowe.org/semjobKB/ontology/document\#hasFrequency"} \textquotedblleft5\textquotedblright.
It expresses that the document contains 5 phrases with the lemma \emph{sale department}. Exactly which phrase appears in the document is not stored in the document graph. The textual representation could be added to the graph, but it is not required to know which instance of the resource appears in the document; what matters is that the resource (knowledge) is present in the document. Statistical information such as the number of triples, unique triples, predicates, terms and sentences is unique for each document.
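The one-graph-per-document design can be sketched as a mapping from a graph identifier to the set of triples extracted from that document. The graph URI naming scheme used here is an assumption, not taken from the thesis.

```python
# Prefixes follow the URI examples in this chapter; the graph prefix is assumed.
TERM = "http://datlowe.org/semjobKB/data/document/term#"
PRED = "http://datlowe.org/semjobKB/data/document/predicate#"
GRAPH = "http://datlowe.org/semjobKB/data/document/graph#"  # hypothetical

graphs = {}

def add_triple(doc_id, subject, predicate, obj):
    """Insert a triple into the named graph belonging to one document."""
    graphs.setdefault(GRAPH + doc_id, set()).add((subject, predicate, obj))

# The triple extracted from "Director speaks English":
add_triple("doc1", TERM + "director", PRED + "speak", TERM + "english")
```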

\section{Searching for a proper database}

The graphs created from the information acquired from the documents have to be persisted in a database in order to search over them. The triples can be stored in a relational database; however, a relational database is not designed for this kind of data. A \textbf{triplestore} is a purpose-built database for storing and retrieving triples. Like a relational database, it stores data and retrieves it via a query language. Moreover, triples can be imported and exported using RDF. There are several implementations of triplestores:
\begin{itemize}
	\item \textbf{FUSEKI} is an HTTP server that provides the SPARQL query language and allows RDF triples to be updated via PUT/POST/DELETE. It runs only in memory, thus making it unsuitable for permanent persistence of graphs.
	\item \textbf{Oracle} is an open, standards-based, scalable, secure, reliable and performant RDF management platform. Based on a graph data model, RDF data (triples) are persisted, indexed and queried, like other object-relational data types.\cite{Oracle}
	\item \textbf{OpenRDF Sesame} is a de-facto standard framework for processing RDF data. This includes parsers, storage solutions, reasoning and querying, using the SPARQL query language. It offers a flexible and easy to use Java API that can be connected to all leading RDF storage solutions.\cite{OpenRDF}
	\item \textbf{OpenLink Virtuoso} is a middleware and database engine hybrid that combines the functionality of a traditional RDBMS, ORDBMS, virtual database, RDF, XML, free-text, web application server and file server functionality in a single system.\cite{Virtuoso}
\end{itemize}
The requirements are as follows: permanently persist graphs in the database, update them when using different search strategies, and search over them. The database has to be able to store hundreds of graphs. Performance is another important aspect. Hundreds of graphs will consume a large amount of space, and therefore an in-memory database is not an option. This constraint removes \emph{FUSEKI} from the possible options. The database has to provide indexing for faster query evaluation. Oracle, OpenRDF Sesame and Virtuoso provide the same services and could all be selected for the purpose of this thesis. All of them provide a Java API for interacting with the database from the application code. Due to previous experience with the Virtuoso database as part of the Software Project course at the Faculty of Mathematics and Physics, it will be selected as the data storage for the experimental application. The Virtuoso solution contains \emph{Virtuoso Conductor}, a web-based database administration user interface through which graphs can be accessed; it provides the SPARQL query language (explained in the next chapter). \emph{Virtuoso Jena API} is an API for accessing the database directly from a Java project, thus making it simple to interact with the database.

\section{Summary}
We have elaborated on the possible representations of the extracted information. We found that RDF is designed for describing and modeling information and is suitable for our task. We have discussed how to model the extracted information and how to create and store the graphs. At the end of the chapter we selected an appropriate database engine. The remaining research concerns the design of a search engine over the graphs.