\chapter{Related Work}
\label{sec:rw-intro}

In this chapter, we briefly present work relevant to the thesis topic. In Section~\ref{sec:rw-lp} we introduce
important logic programming concepts, and in Section~\ref{sec:rw-ilp} we discuss in
more detail how Inductive Logic Programming (ILP) works. Later, in Section~\ref{sec:rw-arm}, we briefly present
association rule mining and highlight the concepts relevant to this thesis. In Section~\ref{sec:rw-discretization}, we
address the discretization of numerical attributes and discuss the most popular unsupervised methods. After that, we
present in Section~\ref{sec:rw-miningOptimizedRules} a method for finding optimized numerical intervals in association
rules. In Section~\ref{sec:rw-infotheoreticmeasures}, we introduce the information-theoretic measures used in this
thesis, and finally, in Sections~\ref{sec:rw-semanticWeb} and~\ref{sec:rw-lod}, we give a quick overview of the Semantic
Web and Linked Open Data.


\section{Logic Programming}
\label{sec:rw-lp}

In this Section, we present basic logic programming and deductive database terminology, such as \emph{literal},
\emph{clause}, \emph{program clause}, \emph{Datalog clause} and \emph{hypothesis}. We follow the terminology and
definitions of \citet{DBLP:books/sp/Lloyd87} and \citet{LavracDz94}.

Firstly, it is important to
mention that variables are represented as an uppercase letter followed by a string of lowercase letters and/or digits.
Function and predicate symbols (including constants) are represented as a lowercase letter followed by a string of
lowercase letters and/or digits.

According to~\citet{DBLP:books/sp/Lloyd87}, an \emph{atomic formula} $L$ is a predicate symbol followed by a bracketed
$n$-tuple of \emph{terms}; a \emph{term} is either a variable or a function symbol followed by a bracketed $n$-tuple of
terms; and a \emph{constant} is a function symbol of arity 0. For example, if $f$, $g$, and $h$ are function symbols and
$X$ is a variable, then $g(X)$ is a term, $f(g(X),h)$ is also a term, and $h$ is a constant.

A \emph{literal} is an \emph{atomic formula} which can be negated or not. So both $L$ and its negation $\overline{L}$
are literals for any \emph{atomic formula} $L$. A clause $c$ is a disjunction of literals, for example:
\begin{center}
  $c=(L_1 \vee L_2 \vee \ldots \vee \overline{L_{i}} \vee \overline{L_{i+1}} \vee \ldots) \equiv
 L_1 \vee L_2 \vee \ldots \leftarrow L_i \wedge L_{i+1} \wedge \ldots$
\end{center}

Such a disjunction of literals can also be written in the following ways:
\begin{center}
 $\{L_1,L_2,\ldots,\overline{L_i},\overline{L_{i+1}},\ldots\}$ \\
$ L_1,L_2,\ldots \leftarrow L_i,L_{i+1},\ldots$
\end{center}

A \emph{program clause} is a clause which contains exactly one positive literal. That is, it has the form:
\begin{center}
 $\underbrace{T}_{head} \leftarrow \underbrace{L_1,L_2,\ldots}_{body}$
\end{center}

A \emph{Datalog clause} is a program clause with no function symbols of arity different from zero. That
means that only variables and constants can be used as predicate arguments. A Datalog clause is considered \emph{safe}
if all the variables present in the head literal $T$ are also present in the body. Moreover, negated literals may be
allowed in the body, as long as every variable occurring in a negated body literal is also present in a
non-negated body literal.

Besides that, it is important to mention that Datalog rules support not only relational but also
arithmetic predicates (such as $=$, $\neq$, $>$, $<$, $\geq$, $\leq$). This improves the expressiveness of the
language by allowing, for instance, the domain of a numerical attribute variable to be restricted to a specific
interval, which is required in this thesis.
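For instance, a safe Datalog clause using such arithmetic predicates (the predicate names below are our own
illustration) could restrict an age variable to the interval $[18,30]$:

\begin{center}
 $youngAdult(X) \leftarrow hasAge(X,Age), Age \geq 18, Age \leq 30$
\end{center}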

\section{Inductive Logic Programming}
\label{sec:rw-ilp}

Inductive Logic Programming (ILP) is a rule learning method that combines machine learning with the
logic programming representation. Given background knowledge and a set of training examples represented as a
logical database of facts (ground literals without variables), an ILP system learns a hypothesis in the form of a
logic program.

The clauses in a hypothesis aim at recognizing a concept $\mathcal{C}$, which is defined as a subset of objects
$\mathcal{C} \subseteq \mathcal{U}$. For example, $\mathcal{U}$ may be the set of all students at a university and
$\mathcal{C}$ the set of all students studying Informatics. Concepts and objects are represented in a description
language $\mathcal{L}$, typically a subset of first-order logic.

The training data for learning a concept $\mathcal{C}$ is a set of examples $\mathcal{E}$, where each example is a
ground fact labeled as positive ($\oplus$) if the object is an instance of $\mathcal{C}$, or negative ($\ominus$)
otherwise. We denote the set of positive examples as $\mathcal{E}^{+}$ and the set of negative examples as
$\mathcal{E}^{-}$.

The background knowledge $\mathcal{B}$ is prior knowledge which contributes to learning the hypothesis. It indirectly
restricts the hypothesis search space, as the learned hypothesis $\mathcal{H}$ should be consistent with the
background knowledge as well as with the training examples.

As defined in \citet{DBLP:journals/ml/LavracD96}, a hypothesis $\mathcal{H}$ covers an example $e$ given a
background knowledge $\mathcal{B}$ ($covers(\mathcal{B} \cup \mathcal{H},e)$) if the example satisfies the hypothesis
and background knowledge. In logic programming, where $\mathcal{H}$ is a set of program clauses and an example is a
ground fact, it means that $e$ is entailed by $\mathcal{B} \cup \mathcal{H}$.

\begin{center}
 $covers(\mathcal{B} \cup \mathcal{H},e)=true \quad$ if $\quad \mathcal{B} \cup \mathcal{H} \models e$
\end{center}

We can also define a covering function for a set of examples, which returns the subset of examples entailed by
$\mathcal{B} \cup \mathcal{H}$:

\begin{center}
 $covers(\mathcal{B} \cup \mathcal{H},\mathcal{E})=\{e \in \mathcal{E} | \mathcal{B} \cup \mathcal{H} \models e\}$
\end{center}

Therewith, we can define two important concepts in inductive learning:

\begin{itemize}
 \item Completeness: A hypothesis $\mathcal{H}$ is complete with respect to background knowledge $\mathcal{B}$ and
examples $\mathcal{E}$ if all the positive examples are covered, or in other words:
$covers(\mathcal{B} \cup \mathcal{H},\mathcal{E}^{+})=\mathcal{E}^{+}$
 \item Consistency: A hypothesis $\mathcal{H}$ is consistent with respect to background knowledge $\mathcal{B}$ and
examples $\mathcal{E}$ if no negative examples are covered, or in other words:
$covers(\mathcal{B} \cup \mathcal{H},\mathcal{E}^{-})=\emptyset$
\end{itemize}

The task of learning a concept requires a hypothesis which is complete and consistent with respect to the given
training examples and background knowledge. The example shown in Table~\ref{tab:ilpExample} illustrates a simple
problem of learning a hypothesis for the target relation
$daughter(X,Y)$.

\begin{table}[h!]
\caption{A simple ILP problem: learning the \emph{daughter} relation \citep{DBLP:journals/ml/LavracD96}.}
  \begin{center}
      \begin{tabular}{ r | l }
      \toprule
      \textbf{Training Examples} & \textbf{Background Knowledge}\\
      \midrule
      daughter(mary,ann) $\oplus$	& parent(ann,mary).	\\
      daughter(eve,tom) $\oplus$	& parent(ann,tom).	\\
      daughter(tom,ann) $\ominus$ 	& parent(tom,eve).	\\
      daughter(eve,ann) $\ominus$	& parent(tom,ian).	\\
					& female(ann).		\\
					& female(mary).		\\
					& female(eve).		\\
      \bottomrule
      \end{tabular}
  \label{tab:ilpExample}
  \end{center}
\end{table}

If we consider, for example, the language of safe Datalog clauses, it is possible to formulate the following complete
and consistent hypothesis:

\begin{center}
  $\mathcal{H} = daughter(X,Y)$ :- $female(X),parent(Y,X)$ 
\end{center}
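The hypothesis above can also be checked mechanically. The following sketch (our own illustration, not code from the
cited works) encodes the background knowledge and training examples of Table~\ref{tab:ilpExample} as Python sets of
ground facts, and verifies that $\mathcal{H}$ is complete and consistent:

```python
# Background knowledge as ground facts (from the daughter example).
parent = {("ann", "mary"), ("ann", "tom"), ("tom", "eve"), ("tom", "ian")}
female = {"ann", "mary", "eve"}

def covers(x, y):
    """H = daughter(X,Y) :- female(X), parent(Y,X)."""
    return x in female and (y, x) in parent

positive = [("mary", "ann"), ("eve", "tom")]
negative = [("tom", "ann"), ("eve", "ann")]

# All positive examples covered, no negative example covered.
complete = all(covers(x, y) for x, y in positive)
consistent = not any(covers(x, y) for x, y in negative)
print(complete, consistent)
```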

\subsection{Searching the Hypothesis Space}

\begin{comment}
In ILP, the hypothesis space is determined by the language of the programs $\mathcal{L}$ consisting of the possible
program clauses allowed by the language. Also, the vocabulary of predicate symbols is determined by the predicates
from the background knowledge $\mathcal{B}$.
\end{comment}

\citet{DBLP:journals/ml/LavracD96} structures the search space with a partial ordering of clauses based on
$\theta$-subsumption, in order to systematically search the space of program clauses. A clause $c$ $\theta$-subsumes
$c'$ if there exists a substitution $\theta$ such that $c\theta \subseteq c'$. This also introduces a notion of
generality, where clause $c$ is at least as general as $c'$, or in other words, $c'$ is a specialization of $c$.

For example, if we have the following $c$ and $c'$:
\begin{align*}
  c &= daughter(X,Y) \text{ :- } parent(Y,X) \\
  c'&= daughter(X,Y )\text{ :- } female(X),parent(Y,X)
\end{align*}

we know that $c'$ is a specialization of $c$ because, for $\theta=\emptyset$, $c \subset c'$:
\begin{center}
$\{daughter(X,Y),\neg parent(Y,X)\} \subset \{daughter(X,Y),\neg female(X),\neg parent(Y,X)\}$ 
\end{center}

To better illustrate this, if we have the following $c$ and $c'$:
\begin{align*}
  c &= livesIn(X,Y) \text{ :- } marriedTo(X,Z),livesIn(Z,Y)\\
  c'&=livesIn(X,germany) \text{ :- } marriedTo(X,Z),livesIn(Z,germany)
\end{align*}

we know that $c'$ is a specialization of $c$ because with the substitution $\theta=\{Y/germany\}$, $c\theta=c'$.
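A brute-force $\theta$-subsumption test can be sketched as follows (our own illustration; note that
$\theta$-subsumption checking is NP-complete in general, so this enumeration is exponential in the number of
variables). Clauses are encoded as sets of signed literals, and every mapping from the variables of $c$ to the terms of
$c'$ is tried:

```python
from itertools import product

def is_var(t):
    # Variables start with an uppercase letter (Section 2.1 convention).
    return t[:1].isupper()

def subsumes(c, c2):
    """Brute force: is there a substitution theta with c(theta) a subset
    of c2?  A clause is a set of (sign, predicate, argument-tuple)."""
    variables = sorted({t for _, _, args in c for t in args if is_var(t)})
    terms = {t for _, _, args in c2 for t in args}
    for values in product(terms, repeat=len(variables)):
        theta = dict(zip(variables, values))
        image = {(s, p, tuple(theta.get(t, t) for t in args))
                 for s, p, args in c}
        if image <= c2:
            return True
    return False

# First example: c theta-subsumes c' with the empty substitution.
c = {("+", "daughter", ("X", "Y")), ("-", "parent", ("Y", "X"))}
c_spec = {("+", "daughter", ("X", "Y")), ("-", "female", ("X",)),
          ("-", "parent", ("Y", "X"))}
print(subsumes(c, c_spec), subsumes(c_spec, c))
```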

This notion gives us an important property that can be used for pruning parts of the search space:

\begin{itemize}
 \item When generalizing from $c$ to $c'$, all the examples covered by $c$ are also covered by $c'$. So if $c$ is
inconsistent, then all its generalizations are inconsistent as well.
  \item When specializing from $c$ to $c'$, an example not covered by $c$ will neither be covered by $c'$. So if $c'$
does not cover any positive examples, neither does any of its specializations.
\end{itemize}

These properties are the basis for two main search approaches:

\begin{itemize}
 \item Bottom-up: starts with the most specific clauses and searches for least general generalizations.
 \item Top-down: starts with the most general clauses and searches for the most general consistent specializations.
\end{itemize}

As in this thesis we work only with the top-down approach, we discuss it in more detail in the following Section.
Further information about the bottom-up approach can be found in \citet{DBLP:journals/ml/LavracD96}.

\subsection{Top-Down ILP}

The search space of program clauses can be represented as a lattice, structured by $\theta$-subsumption generality
ordering. This lattice is called a \emph{refinement graph}, which can be used to direct the search from the most general
to the most specific hypotheses. Figure~\ref{fig:refinementGraph} illustrates a part of the refinement graph for the
family relation example shown in Table~\ref{tab:ilpExample}.

\begin{figure}[h!]
\begin{center}
  \includegraphics[width=0.7\linewidth]{./Figures/refinementGraph.png}
\end{center}
\caption{Part of the refinement graph for the family relations problem~\citep{DBLP:journals/ml/LavracD96}.}
\label{fig:refinementGraph}
\end{figure}

In such a graph, nodes are program clauses and arcs are refinement operations, which can be of two kinds: apply
a substitution on the clause; or add a literal to the body of the clause.

The top-down algorithm consists basically of a specialization loop embedded inside a covering loop. The former refines
a clause ensuring consistency, and the latter adds clauses to the hypothesis ensuring completeness. Each loop has a
stopping criterion: the covering loop iterates until it satisfies a sufficiency criterion, and the specialization loop
iterates until it satisfies a necessity criterion. Algorithm~\ref{alg:topDownILP} shows in more detail how the generic
top-down ILP algorithm works.

\begin{algorithm}[!h]
  \caption{Generic top-down specialization ILP algorithm \citep{DBLP:journals/ml/LavracD96}.}
  \KwIn{$\mathcal{E}$: Training examples, $\mathcal{B}$: Background knowledge}
  \KwResult{$\mathcal{H}$: Learned hypothesis}

  \tcp{Initialize the current training set and hypothesis}
  $\mathcal{E}_{cur} \leftarrow \mathcal{E}$ \;
  $\mathcal{H} \leftarrow \emptyset$ \;
  \Repeat(Covering Loop){Sufficiency Stopping Criterion is satisfied} {
    \tcp{Initialize clause with empty body}
    $c \leftarrow T$ :- $\emptyset$ \;
    \Repeat(Specialization Loop){Necessity Stopping Criterion is satisfied} {
      $c_{best} \leftarrow$ Best refinement of $c$ \;
      $c \leftarrow c_{best}$ \;
    } ()
    \tcp{Add clause to the hypothesis}
    $\mathcal{H} \leftarrow \mathcal{H} \cup \{c\}$ \;
    \tcp{Remove positive examples covered by the new hypothesis}
    $\mathcal{E}_{cur} \leftarrow \mathcal{E}_{cur} - covers(\mathcal{B} \cup \mathcal{H},\mathcal{E}_{cur}^{+})$ \;
  } ()
 \label{alg:topDownILP}
\end{algorithm}

In domains with perfect data, the stopping criteria require that all positive examples are covered (completeness), and
no negative examples are covered (consistency). However, in practice, many datasets have imperfect data. Such
imperfection is usually of the following kinds, as described in \citet{DBLP:conf/aii/LavracD92}:

\begin{itemize}
  \item Noise: random errors in the training examples and background knowledge.
  \item Insufficiently covered example space: too sparse training examples from which it is difficult to reliably
detect correlations.
  \item Inexactness: inappropriate or insufficient description language which does not contain an exact description of
the target concept.
  \item Missing facts in the training examples.
\end{itemize}

In order to avoid the effects of imperfect data, ILP can use heuristics as stopping criteria that tolerate
some level of incompleteness and inconsistency. The simplest heuristic is the expected accuracy of a clause, which is
defined as the probability that an example covered by the clause is labeled as positive~\citep{DBLP:conf/aii/LavracD92}:

\begin{equation}
A(c)=P(e \in \mathcal{E}^{+}|c)=\cfrac{n^{+}(c)}{n^{+}(c)+n^{-}(c)} 
\end{equation}
where $n^{+}(c)$ is the number of positive and $n^{-}(c)$ the number of negative examples covered by $c$. 
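This heuristic can be computed directly from the coverage counts, as in the following sketch (our own illustration):

```python
def expected_accuracy(n_pos, n_neg):
    """A(c) = n+(c) / (n+(c) + n-(c)): the probability that an example
    covered by clause c is labeled positive."""
    return n_pos / (n_pos + n_neg)

# A clause covering 8 positive and 2 negative examples:
print(expected_accuracy(8, 2))  # 0.8
```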

\subsection{ILP Complexity}

As shown in \citet{DBLP:journals/ngc/GottlobLS99}, the complexity of ILP resides at the second level of the polynomial
hierarchy, being $\Sigma_2^P$-complete. $\Sigma_2^P$ means that it has complexity $NP^{NP}$, which happens
because there are two interleaved sources of \emph{NP} complexity:
\begin{itemize}
 \item The choice problem, which is in the \emph{NP} class.
 \item The checking problem, which is in the \emph{co-NP} class.
\end{itemize}

The choice problem, in ILP, is the problem of searching the hypothesis space, while the checking problem is the
problem of testing a given hypothesis. Since ILP has such high complexity, it is appropriate to employ various
techniques which aim at the reduction of runtime.

One popular strategy is to limit the number of literals allowed in a clause, which limits the number of
levels in the refinement graph. This is easy to implement and can drastically reduce the size of the hypothesis search
space; nevertheless, it may also reduce the quality of the learned hypothesis. Another popular strategy is data
sampling. When well applied, this can be a very good alternative; however, with semantic data it can be very tricky to
produce a good sample without significant loss of information. There are several different sampling techniques for
semantic data, but we do not discuss them, since this problem is out of the scope of this thesis.

%\section{First Order Inductive Learning}
%\cite{DBLP:journals/ml/Quinlan9}
%\cite{DBLP:conf/ecml/QuinlanC93}

\section{Association Rule Mining}
\label{sec:rw-arm}

Association Rule Mining, as described in~\citet{Agrawal:1993:MAR:170036.170072}, is a method for discovering
interesting relations between variables in large databases. It operates on a database composed of a set of transactions
$\mathcal{D}=\{t_1,t_2,\ldots,t_m\}$ and a set of items $\mathcal{I}=\{i_1,i_2,\ldots,i_n\}$. Each transaction
$\mathcal{T} \subseteq \mathcal{I}$ consists of a set of items, usually represented by a binary vector of size
$n$ indicating the presence or absence of each item in the transaction.

The objective of association rule mining is to learn inference rules of the form $X \Rightarrow Y$, where $X,Y
\subseteq \mathcal{I}$ and $X \cap Y = \emptyset$. Rules are selected according to various interestingness measures,
which are discussed below.

\subsection{Measures of Interest}

In this Section, we discuss the measures used for selecting interesting rules in the mining process. As defined
in~\citet{Agrawal:1993:MAR:170036.170072}, a transaction $\mathcal{T}$ supports an itemset $X \subseteq
\mathcal{I}$ if $X \subseteq \mathcal{T}$. The support measure $supp(X)$ is defined as the ratio between
the number of transactions supporting $X$ and the total size of the database $\mathcal{D}$:

\begin{equation}
 supp(X)=\cfrac{|\{ \mathcal{T} \in \mathcal{D} | X \subseteq \mathcal{T} \}|}{|\mathcal{D}|}
\end{equation}

The support of a rule $X \Rightarrow Y$ is defined as:

\begin{equation}
 supp(X \Rightarrow Y)=supp(X \cup Y)
\end{equation}

The confidence of a rule $X \Rightarrow Y$, which can also be interpreted as the probability of the head given the
body $P(Y|X)$, is defined as:

\begin{equation}
 conf(X \Rightarrow Y)=\cfrac{supp(X \cup Y)}{supp(X)}
\end{equation}

%Need to cite who invented lift
Confidence and support are the two measures used in classical association rule mining. However, there are other
relevant measures, such as \emph{lift}, which measures how different the observed rule support is from the support
expected under independence of $X$ and $Y$:

\begin{equation}
 lift(X \Rightarrow Y)=\cfrac{supp(X \Rightarrow Y)}{supp(X)supp(Y)}
\end{equation}

A lift value of 1 implies that $X$ and $Y$ are independent, and therefore a rule involving both itemsets is not
relevant. A lift value greater than 1 provides information about the level of dependence between the two itemsets:
the higher the lift value, the more potentially interesting the rule.
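To make these measures concrete, the following sketch (our own illustration) computes them from their definitions over
the small transaction database of Table~\ref{tab:itemsetLattice}, used later in this chapter:

```python
# Transactions of the example database D.
D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"},
     {"B", "E"}, {"A", "B", "C", "E"}]

def supp(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(1 for t in D if itemset <= t) / len(D)

def conf(body, head):
    """conf(X => Y) = supp(X u Y) / supp(X)."""
    return supp(body | head) / supp(body)

def lift(body, head):
    """lift(X => Y) = supp(X u Y) / (supp(X) * supp(Y))."""
    return supp(body | head) / (supp(body) * supp(head))

print(supp({"B", "C"}), conf({"B", "C"}, {"E"}), lift({"B", "C"}, {"E"}))
```

For the rule $\{B,C\} \Rightarrow \{E\}$, this yields support $3/5$, confidence $1$, and lift $1.25$.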

\subsection{Anti-monotonicity of Support}

An important characteristic of the support measure is its anti-monotonicity: for any itemsets $X \subseteq \mathcal{I}$
and $Y \subseteq \mathcal{I}$ such that $X \subseteq Y$, the support of $Y$ cannot be greater than the support of $X$.
In other words, all subsets of a frequent itemset are also frequent, and all supersets of an infrequent itemset are
also infrequent.

When searching for frequent itemsets, this characteristic plays a fundamental role in pruning the search space. The
itemset search space grows exponentially with the number of items $n$, since the set of possible itemsets is the power
set over $\mathcal{I}$, which has size $2^n-1$. As association rules are usually required to satisfy a specified minimum
support, if we know that a given itemset does not satisfy it, we can automatically prune all its supersets.

\subsection{Itemset Lattice}

The problem of searching for frequent itemsets (FI), i.e., itemsets whose support is greater than or equal to the
specified minimum support, can be structured in a lattice.

There are two special kinds of itemsets:
\begin{itemize}
 \item Maximal frequent itemset (MFI): a frequent itemset is maximal if none of its immediate supersets are frequent.
 \item Closed frequent itemset (CFI): a frequent itemset is closed if all of its immediate supersets have lower
support.
\end{itemize}

Both maximal and closed frequent itemsets have special properties. By knowing all the maximal frequent itemsets of a
database, we can deduce all the frequent itemsets. Nevertheless, it is not possible to know the support of each of the
frequent itemsets. By knowing all the closed frequent itemsets of a database, it is possible to deduce not only all the
frequent itemsets, but also their supports.

Figure~\ref{fig:itemsetLattice} shows the resulting itemset lattice for the database $\mathcal{D}$
from Table~\ref{tab:itemsetLattice}. There, frequent itemsets are shown with a thicker border. The minimum support
threshold is set to $2/5$, so the frequent itemsets are those with frequency greater than or equal to 2. In this
example, $\{ABCE\}$ is the only maximal frequent itemset, and $\{C, AC, BE, BCE, ABCE\}$ are the closed frequent
itemsets.

\begin{table}[h!]
  \begin{center}
      \begin{tabular}{ c | l l l l }
      \toprule
      \textbf{TID} & \multicolumn{4}{c}{\textbf{Items}} \\
      \midrule
	1	& A & C & D & \\
	2 	& B & C & E & \\
	3	& A & B & C & E \\
	4 	& B & E &   & \\
	5	& A & B & C & E \\
      \bottomrule
      \end{tabular}
  \caption{Example of a transaction database $\mathcal{D}$~\citep{Pasquier99efficientmining}.}
  \label{tab:itemsetLattice}
 \end{center}
\end{table}

\begin{figure}[!h]
  \caption{Itemset Lattice example~\citep{Pasquier99efficientmining}.}
  \centering
  \begin{tikzpicture}[scale=0.75,auto=center,every node/.style={draw=black, font=\tiny}]
    \node (n0) at (11,15) {$\emptyset$};

    \node (A)[very thick] 	at (5,12)  {$A^{(3)}$};
    \node (B)[very thick] 	at (8,12)  {$B^{(4)}$};
    \node (C)[very thick]	at (11,12) {$C^{(4)}$};
    \node (D)			at (14,12) {$D^{(1)}$};
    \node (E)[very thick]	at (17,12) {$E^{(4)}$};

    \node (AB)[very thick] 	at (2,9)  {$AB^{(2)}$};
    \node (AC)[very thick] 	at (4,9)  {$AC^{(3)}$};
    \node (AD) 			at (6,9)  {$AD^{(1)}$};
    \node (AE)[very thick] 	at (8,9)  {$AE^{(2)}$};
    \node (BC)[very thick] 	at (10,9) {$BC^{(3)}$};
    \node (BD) 			at (12,9) {$BD^{(0)}$};
    \node (BE)[very thick] 	at (14,9) {$BE^{(4)}$};
    \node (CD) 			at (16,9) {$CD^{(1)}$};
    \node (CE)[very thick] 	at (18,9) {$CE^{(3)}$};
    \node (DE) 			at (20,9) {$DE^{(0)}$};

    \node (ABC)[very thick] 	at (2,6)  {$ABC^{(2)}$};
    \node (ABD) 		at (4,6)  {$ABD^{(0)}$};
    \node (ABE)[very thick] 	at (6,6)  {$ABE^{(2)}$};
    \node (ACD) 		at (8,6)  {$ACD^{(1)}$};
    \node (ACE)[very thick] 	at (10,6) {$ACE^{(2)}$};
    \node (ADE) 		at (12,6) {$ADE^{(0)}$};
    \node (BCD) 		at (14,6) {$BCD^{(0)}$};
    \node (BCE)[very thick] 	at (16,6) {$BCE^{(3)}$};
    \node (BDE) 		at (18,6) {$BDE^{(0)}$};
    \node (CDE) 		at (20,6) {$CDE^{(0)}$};

    \node (ABCD) 		at (5,3)  {$ABCD^{(0)}$};
    \node (ABCE)[very thick] 	at (8,3)  {$ABCE^{(2)}$};
    \node (ABDE) 		at (11,3) {$ABDE^{(0)}$};
    \node (ACDE) 		at (14,3) {$ACDE^{(0)}$};
    \node (BCDE) 		at (17,3) {$BCDE^{(0)}$};

    \node (ABCDE) at (11,0) {$ABCDE^{(0)}$};

    \foreach \from/\to in
      {n0/A,n0/B,n0/C,n0/D,n0/E,
       A/AB, A/AC, A/AD, A/AE, 
       B/AB, B/BC, B/BD, B/BE, 
       C/AC, C/BC, C/CD, C/CE,
       D/AD, D/BD, D/CD, D/DE,
       E/AE, E/BE, E/CE, E/DE,
       AB/ABC, AB/ABD, AB/ABE,
       AC/ABC, AC/ACD, AC/ACE,
       AD/ABD, AD/ACD, AD/ADE,
       AE/ABE, AE/ACE, AE/ADE,
       BC/ABC, BC/BCD, BC/BCE,
       BD/ABD, BD/BCD, BD/BDE,
       BE/ABE, BE/BCE, BE/BDE,
       CD/ACD, CD/BCD, CD/CDE,
       CE/ACE, CE/BCE, CE/CDE,
       DE/ADE, DE/BDE, DE/CDE,
       ABCD/ABC, ABCD/ABD, ABCD/ACD, ABCD/BCD,
       ABCE/ABC, ABCE/ABE, ABCE/ACE, ABCE/BCE,
       ABDE/ABD, ABDE/ABE, ABDE/ADE, ABDE/BDE,
       ACDE/ACD, ACDE/ACE, ACDE/ADE, ACDE/CDE,
       BCDE/BCD, BCDE/BCE, BCDE/BDE, BCDE/CDE,
       ABCDE/ABCD, ABCDE/ABCE, ABCDE/ABDE, ABCDE/ACDE, ABCDE/BCDE}  
    \draw (\from) -- (\to);
  \end{tikzpicture}
  \label{fig:itemsetLattice}
\end{figure}
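The maximal and closed frequent itemsets of this example can also be computed by exhaustive enumeration, as in the
following sketch (our own illustration; practical miners avoid enumerating the full power set):

```python
from itertools import combinations

D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"},
     {"B", "E"}, {"A", "B", "C", "E"}]
items = sorted(set().union(*D))
min_freq = 2  # minimum support 2/5

def freq(itemset):
    return sum(1 for t in D if itemset <= t)

# Enumerate all non-empty itemsets and keep the frequent ones.
frequent = {frozenset(s)
            for r in range(1, len(items) + 1)
            for s in combinations(items, r)
            if freq(frozenset(s)) >= min_freq}

# Maximal: no immediate superset is frequent.
maximal = {s for s in frequent
           if not any(s | {i} in frequent for i in items if i not in s)}

# Closed: every immediate superset has strictly lower frequency.
closed = {s for s in frequent
          if all(freq(s | {i}) < freq(s) for i in items if i not in s)}

print(sorted("".join(sorted(s)) for s in maximal))
print(sorted("".join(sorted(s)) for s in closed))
```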

\subsection{Apriori Algorithm}
The apriori algorithm, discussed in~\citet{Agrawal:1994:FAM:645920.672836}, is a method for finding frequent itemsets
which exploits the anti-monotonicity of the support measure. It computes the support of the itemsets iteratively,
in ascending order of size. The process takes $k$ iterations, where $k$ is the size of the largest frequent itemset. In
each iteration $i \leq k$, the database is scanned once, and the supports of all candidate itemsets of size $i$ are
computed.

In the first iteration, the supports of all itemsets of size 1 are computed. In each subsequent iteration $1 < i \leq
k$, a set of candidates $C_i$ is created by joining the frequent itemsets of size $i-1$ found in the previous
iteration. Two itemsets of size $i-1$ can only be joined if they share $i-2$ items, so that their union results in an
itemset of size $i$. Only after finding the set of candidate itemsets $C_i$ is the database scanned in order to
determine the support of each of them. Algorithm~\ref{alg:apriori} shows in more detail how the apriori algorithm
works.

\begin{algorithm}[h!]
  \caption{Apriori frequent itemset discovery~\citep{Pasquier99efficientmining}.}
  \SetKwFunction{aprioriGen}{aprioriGen}
  \KwIn{$\mathcal{D}$: Database of transactions}
  \KwResult{$L$: Frequent itemsets}

  $L_1 \leftarrow$ Frequent itemsets of size 1 \;
  $L \leftarrow L_1$ \;
  $k \leftarrow 2$ \;
  \While{$L_{k-1} \neq \emptyset$}{
      $C_k \leftarrow$ \FuncSty{aprioriGen(}$L_{k-1}$\FuncSty{)} \;
      \ForAll{$t \in \mathcal{D}$}{
	$C_t \leftarrow \{ c | c \in C_k \wedge c \subseteq t\}$ \;
	\ForAll{$c \in C_t$}{
	  $c.frequency \leftarrow c.frequency + 1$ \;
	}
      }
      $L_k \leftarrow \{ c | c \in C_k \wedge c.frequency \geq minSupport\}$ \;
      $L \leftarrow L \cup L_k$ \;
      $k \leftarrow k+1$ \;
  }
  \Return{$L$} \;
 \label{alg:apriori}
\end{algorithm}

Figure~\ref{fig:aprioriLattice} shows the itemset lattice generated by the apriori algorithm on the database
example from Table~\ref{tab:itemsetLattice}.

\begin{figure}[h!]
  \caption{Itemset Lattice generated by the apriori algorithm for the example from Table~\ref{tab:itemsetLattice}.}
  
  \centering
  \begin{tikzpicture}[scale=0.75,auto=center,every node/.style={draw=black, font=\tiny}]
    \node (n0) at (11,15) {$\emptyset$};

    \node (A)[very thick] 	at (5,12)  {$A^{(3)}$};
    \node (B)[very thick] 	at (8,12)  {$B^{(4)}$};
    \node (C)[very thick]	at (11,12) {$C^{(4)}$};
    \node (D)			at (14,12) {$D^{(1)}$};
    \node (E)[very thick]	at (17,12) {$E^{(4)}$};

    \node (AB)[very thick] 	at (2,9)  {$AB^{(2)}$};
    \node (AC)[very thick] 	at (4,9)  {$AC^{(3)}$};
    \node (AE)[very thick] 	at (8,9)  {$AE^{(2)}$};
    \node (BC)[very thick] 	at (10,9) {$BC^{(3)}$};
    \node (BE)[very thick] 	at (14,9) {$BE^{(4)}$};
    \node (CE)[very thick] 	at (18,9) {$CE^{(3)}$};

    \node (ABC)[very thick] 	at (2,6)  {$ABC^{(2)}$};
    \node (ABE)[very thick] 	at (6,6)  {$ABE^{(2)}$};
    \node (ACE)[very thick] 	at (10,6) {$ACE^{(2)}$};
    \node (BCE)[very thick] 	at (16,6) {$BCE^{(3)}$};

    \node (ABCE)[very thick] 	at (8,3)  {$ABCE^{(2)}$};

    \foreach \from/\to in
      {n0/A,n0/B,n0/C,n0/D,n0/E,
       A/AB, A/AC, A/AE, 
       B/AB, B/BC, B/BE, 
       C/AC, C/BC, C/CE,
       E/AE, E/BE, E/CE,
       AB/ABC, AB/ABE,
       AC/ABC, AC/ACE,
       AE/ABE, AE/ACE,
       BC/ABC, BC/BCE,
       BE/ABE, BE/BCE,
       CE/ACE, CE/BCE,
       ABCE/ABC, ABCE/ABE, ABCE/BCE}  
    \draw (\from) -- (\to);
  \end{tikzpicture}
  \label{fig:aprioriLattice}
\end{figure}
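The level-wise search described above can be sketched compactly as follows (our own illustration, working with absolute
frequencies instead of relative supports; the candidate generation corresponds to the \emph{aprioriGen} join and prune
steps):

```python
from itertools import combinations

def apriori(D, min_freq):
    """Level-wise frequent-itemset search exploiting the
    anti-monotonicity of support."""
    items = sorted(set().union(*D))
    freq = lambda s: sum(1 for t in D if s <= t)
    # L1: frequent itemsets of size 1, with their frequencies.
    L = {frozenset([i]): f for i in items
         if (f := freq(frozenset([i]))) >= min_freq}
    result = dict(L)
    k = 2
    while L:
        # Join step: unite two frequent (k-1)-itemsets sharing k-2 items.
        candidates = {a | b for a in L for b in L if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in L
                             for s in combinations(c, k - 1))}
        # One database scan per level to count candidate frequencies.
        L = {c: freq(c) for c in candidates if freq(c) >= min_freq}
        result.update(L)
        k += 1
    return result

D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"},
     {"B", "E"}, {"A", "B", "C", "E"}]
frequent = apriori(D, min_freq=2)
print(len(frequent), frequent[frozenset("ABCE")])
```

On the example database this finds the 15 frequent itemsets marked with thick borders in the lattice figures.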

As we show later in Section~\ref{sec:correlation-lattice}, the itemset lattice and the apriori algorithm inspired and
served as the basis for our proposed correlation lattice. Therefore, both structures, and their building processes,
are notably similar.

\section{Discretization of Numerical Attributes}
\label{sec:rw-discretization}

Discretization of continuous features is a very important tool which is frequently used in statistics, data mining,
and machine learning. It is a preprocessing technique used for reducing the effects of observation errors.
It is essentially a form of quantization, where the original data values in a given interval are replaced by a
representative value of that interval.

Discretization techniques can be divided into supervised and unsupervised methods. The former require the objects to
be labeled with classes, and discretize the domain with the objective of reducing class entropy, while the latter do
not consider object classes, only object frequencies. Unsupervised methods normally require the number of buckets to
be defined beforehand.

Various different supervised and unsupervised methods for the discretization of continuous features are discussed in
\citet{Dougherty95supervisedand}. To build the lattice, we do not consider labels for the examples, therefore we use
unsupervised methods. The discretization of numerical attributes plays a key role in this thesis, as it allows us
to consistently compare how different populations are distributed along a given attribute.

Supervised methods can be used for finding interesting intervals in the rules, by considering positive and negative
examples as labels. However, as it is out of the scope of this thesis, we do not discuss them.

\subsection{Equal Width}
This unsupervised discretization method consists of simply taking the maximum and minimum values of the numerical
attribute's domain, and splitting this interval into $k$ buckets of equal width $w$:

\begin{equation}
 w=\cfrac{max-min}{k}
\end{equation}

It is the simplest and most straightforward discretization method; however, its main problem is that it is highly
sensitive to outliers, which may dramatically skew the resulting distribution.
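A minimal sketch of equal-width discretization (our own illustration):

```python
def equal_width_boundaries(values, k):
    """Split [min, max] into k buckets of equal width w = (max-min)/k;
    returns the k+1 bucket boundaries."""
    lo, hi = min(values), max(values)
    w = (hi - lo) / k
    return [lo + i * w for i in range(k + 1)]

print(equal_width_boundaries([0, 2, 3, 7, 10], 5))
# A single outlier stretches the grid and leaves most buckets empty:
print(equal_width_boundaries([0, 1, 2, 3, 1000], 5))
```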

\subsection{Equal Frequencies}

In this unsupervised discretization method, the domain is divided into buckets of equal frequency. That is,
given a total frequency $m$, the domain is discretized in $k$ buckets such that each of them has frequency $m/k$. 

Multiple objects with the same numerical attribute value might make it impossible to find bucket boundaries that equally
distribute the frequencies. In this case, the boundaries are chosen in a way that the resulting distribution is as close
as possible to uniformity, maximizing its entropy.
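A minimal sketch of equal-frequency discretization (our own illustration; a full implementation would additionally
re-balance ties across bucket boundaries to keep the resulting distribution as uniform as possible):

```python
def equal_frequency_buckets(values, k):
    """Split the sorted values into k buckets of (approximately) equal
    frequency using a simple index-based split."""
    xs = sorted(values)
    n = len(xs)
    return [xs[i * n // k:(i + 1) * n // k] for i in range(k)]

print(equal_frequency_buckets([5, 1, 9, 3, 7, 2, 8, 4], 4))
```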

\section{Mining Optimized Rules for Numeric Attributes}
\label{sec:rw-miningOptimizedRules}

In \citet{Brin99miningoptimized}, a technique for efficiently learning association rules with optimized intervals for
numerical attributes is presented. It focuses on finding numerical intervals which optimize a specific interestingness
measure, given an association rule containing a numerical attribute, and the rule's support and confidence distribution
along this attribute.

These rules have the form $(A_1 \in [l_1,u_1]) \wedge C_1 \Rightarrow C_2$, where
$A_1$ is an uninstantiated numerical attribute, $l_1$ and $u_1$ are the lower and upper boundaries of $A_1$, and $C_1$
and $C_2$ are instantiated conditions.

The authors propose an algorithm which determines the values for the boundaries $l_1$ and $u_1$ for the following cases:

\begin{itemize}
 \item Optimized confidence: the rule confidence is maximized, and the support of the condition $(A_1 \in [l_1,u_1])
\wedge
C_1$ is at least the specified minimum support.
  \item Optimized support: the support of the condition $(A_1 \in [l_1,u_1]) \wedge C_1$ is maximized, and the rule
confidence is at least the specified minimum confidence.
  \item Optimized gain: the rule gain is maximized and the confidence is at least the specified minimum confidence.
\citet{Brin99miningoptimized} defines gain of a rule $r$ as $gain(r)= supp(r)*(conf(r)-minConf)$, where $minConf$ is the
confidence threshold.
\end{itemize}

In~\citet{Brin99miningoptimized}, the two-dimensional case with two different numerical attributes is also discussed;
in this Section, however, we focus on the single numerical attribute case.

The algorithm first takes the transactions containing the numerical attribute $A_1$ and discretizes the attribute's
numerical domain of size $n$ into $b < n$ buckets, in order to reduce the complexity of the algorithm. It uses a
supervised bucketing method, which labels a value with ``$+$'' if its confidence is greater than \emph{minConf}
(positive gain), and with ``$-$'' if it is less (negative gain). This step does not compromise the optimality of the
algorithm, since it collapses adjacent values with the same label, generating $b$ buckets of equally labeled values,
each with its own support, confidence, and gain. Figure~\ref{fig:brinbucket} shows an example where a domain of size
$n=6$ is discretized into $b=3$ buckets following this method.
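The collapsing step above can be sketched as follows. This is only a minimal illustration in Python, not the authors'
original implementation; the function name \texttt{collapse\_buckets} and the representation of a bucket as a
\texttt{[label, gain]} pair are our own, and we assume one gain value per domain value is already available.

```python
def collapse_buckets(gains):
    """Merge adjacent domain values whose gain has the same sign into buckets.

    `gains` holds one gain value per domain value; a value is labeled '+'
    when its gain is non-negative (confidence above minConf) and '-'
    otherwise, and runs of equally labeled adjacent values are collapsed.
    """
    buckets = []
    for g in gains:
        label = '+' if g >= 0 else '-'
        if buckets and buckets[-1][0] == label:
            buckets[-1][1] += g           # same label as previous value: extend the bucket
        else:
            buckets.append([label, g])    # label changed: open a new bucket
    return buckets
```

For example, a domain of size $n=6$ with gains \texttt{[5, 3, -2, -4, 7, 1]} collapses into the $b=3$ buckets
\texttt{[['+', 8], ['-', -6], ['+', 8]]}.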

\begin{figure}
\begin{center}
  \includegraphics[width=0.5\linewidth]{./Figures/brin-bucket.png}
\end{center}
\caption{Example of buckets generated~\citep{Brin99miningoptimized}}
\label{fig:brinbucket}
\end{figure}

Algorithm~\ref{alg:brin} shows how the proposed one-dimensional algorithm works for gain optimization.
Figure~\ref{fig:optgain} illustrates its execution steps with an example with $b=6$ buckets and gain values
$10$, $-15$, $20$, $-15$, $20$, and $-15$. \emph{PSet} is the optimized gain set, and \emph{NSet} contains the remaining
intervals not stored in \emph{PSet}. These sets are shown before and after the iterations $i=1$
and $i=2$. As the figure shows, after every iteration, the optimized intervals contained in \emph{PSet} have positive
gain, and their neighboring intervals have negative gain.

\begin{algorithm}[h!]
 \caption{Algorithm for computing optimized gain set~\citep{Brin99miningoptimized}}
 \label{alg:brin}
 \KwIn{$b$: number of buckets, $k$: maximum number of optimized intervals}
  $PSet \leftarrow \emptyset$\;
  $NSet \leftarrow \{[1,b]\}$\;
  \For{$i \leftarrow 1$ to $k$} {
      Let $P_q$ be the interval in $PSet$ with the smallest value for $gain(min(P_q))$\;
      Let $N_q$ be the interval in $NSet$ with the largest value for $gain(max(N_q))$\;
      \eIf{$gain(min(P_q))+gain(max(N_q)) < 0$}{
	  Delete $P_q$ from $PSet$\;
	  Split $P_q$ into three sub-intervals (with $min(P_q)$ as the middle interval)\;
	  Insert the first and third intervals to $PSet$ and second interval to $NSet$\;
      }{
	  Delete $N_q$ from $NSet$\;
	  Split $N_q$ into three sub-intervals (with $max(N_q)$ as the middle interval)\;
	  Insert the first and third intervals to $NSet$ and second interval to $PSet$\;
      }
  }
  \Return $PSet$\;
\end{algorithm}
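Under the assumption that per-bucket gain values are already available, Algorithm~\ref{alg:brin} can be sketched in
Python as follows. Here $min(P_q)$ and $max(N_q)$, the minimum- and maximum-gain subintervals of an interval, are found
with Kadane's algorithm; the helper names and the handling of empty sub-intervals and of the first iteration (when
\emph{PSet} is empty) are our own, not taken from \citet{Brin99miningoptimized}.

```python
def max_gain_subinterval(gains, lo, hi):
    """Contiguous subinterval of the bucket range [lo, hi] with maximum
    summed gain (Kadane's algorithm). Returns (gain, start, end)."""
    best = (gains[lo], lo, lo)
    cur, start = 0, lo
    for j in range(lo, hi + 1):
        if cur <= 0:
            cur, start = gains[j], j      # restart the running sum at j
        else:
            cur += gains[j]               # extend the running sum
        if cur > best[0]:
            best = (cur, start, j)
    return best

def min_gain_subinterval(gains, lo, hi):
    """Contiguous subinterval of [lo, hi] with minimum summed gain."""
    g, i, j = max_gain_subinterval([-x for x in gains], lo, hi)
    return -g, i, j

def optimized_gain_set(gains, k):
    """Return up to k positive-gain bucket intervals (0-indexed buckets)."""
    pset, nset = [], [(0, len(gains) - 1)]
    for _ in range(k):
        # P_q: interval in PSet whose minimum-gain subinterval is smallest
        p_best = None
        for (lo, hi) in pset:
            g, i, j = min_gain_subinterval(gains, lo, hi)
            if p_best is None or g < p_best[0]:
                p_best = (g, i, j, (lo, hi))
        # N_q: interval in NSet whose maximum-gain subinterval is largest
        n_best = None
        for (lo, hi) in nset:
            g, i, j = max_gain_subinterval(gains, lo, hi)
            if n_best is None or g > n_best[0]:
                n_best = (g, i, j, (lo, hi))
        if n_best is None:
            break
        if p_best is not None and p_best[0] + n_best[0] < 0:
            # carve the worst subinterval out of a PSet interval
            g, i, j, (lo, hi) = p_best
            pset.remove((lo, hi))
            if lo < i:
                pset.append((lo, i - 1))
            if j < hi:
                pset.append((j + 1, hi))
            nset.append((i, j))
        else:
            # move the best subinterval of an NSet interval into PSet
            g, i, j, (lo, hi) = n_best
            nset.remove((lo, hi))
            if lo < i:
                nset.append((lo, i - 1))
            if j < hi:
                nset.append((j + 1, hi))
            pset.append((i, j))
    return sorted(pset)
```

With the gain values from Figure~\ref{fig:optgain}, i.e. \texttt{[10, -15, 20, -15, 20, -15]}, and $k=2$, the sketch
returns the bucket intervals \texttt{[(2, 2), (4, 4)]}: the first iteration moves $[2,4]$ into \emph{PSet}, and the
second carves out its negative middle bucket.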

\begin{figure}
\begin{center}
  \includegraphics[width=1\linewidth]{./Figures/optgain1d.png}
\end{center}
\caption{Execution trace of the procedure shown in Algorithm~\ref{alg:brin}. (a) Before the first iteration, (b) after
the first iteration, (c) before the second iteration, and (d) after the second iteration~\citep{Brin99miningoptimized}.}
\label{fig:optgain}
\end{figure}

It is important to highlight that this technique does not have the same purpose as the work presented in this thesis.
While \citet{Brin99miningoptimized} focuses on finding the optimal interval with respect to a given measure, assuming
that the support and confidence distributions along the numerical attribute are already available, in this thesis we
focus on predicting the existence of such interesting intervals before actually querying the support and confidence
distributions.

\section{Information-Theoretic Measures}
\label{sec:rw-infotheoreticmeasures}

Information-theoretic measures are widely used in various learning processes. Their importance lies in their
power to measure the information contained in attributes and the relationships between them. An attribute is
considered important in data mining if regularities are encountered in the smaller populations obtained based on the
values of this attribute, but not in the larger population. Such regularities are identifiable by lower entropy values,
with interesting attributes causing an entropy reduction.

\begin{comment}
More specifically, this entropy reduction is the difference between
the entropy of the decision attribute and the conditional entropy of the decision attribute given a particular
attribute, whose interestingness we want to investigate.
\end{comment}

Suppose that $X$ is a decision attribute which divides the set of objects $\mathcal{U}$ into a group of disjoint
subsets, one for each value $x \in \mathcal{X}$ of $X$. Also, let $I_X : \mathcal{U} \rightarrow \mathcal{X}$
be an information function that gives the value of
attribute $X$ for an object $t \in \mathcal{U}$. The set of objects $m(X=x) \subseteq \mathcal{U}$ whose value for $X$
is $x$ is defined as follows:

\begin{equation}
 m(X=x)=m(x)=\{t \in \mathcal{U} | I_X(t)=x\}
\end{equation}

The probability distribution of $X$ is then defined by:

\begin{equation}
 P(X=x)=P(x)=\cfrac{|m(x)|}{|\mathcal{U}|},\quad x \in \mathcal{X}
\end{equation}

Information-theoretic measures are defined over probability distributions. They can measure the importance of an
attribute, the association between attributes, and the dissimilarity or similarity of populations. In the next
Sections, we discuss some of the most important measures.

\subsection{Shannon's Entropy}

Shannon's entropy measure $H$ is a non-negative function which may be interpreted as a measure of the information
content of, or uncertainty about, an attribute $X$. It is defined over the probability distribution of $X$:

\begin{equation}
\begin{split}
 H(P(X))=H(X)&=E_{P(X)}[-\log P(X)] \\
 &=-\sum_{x \in \mathcal{X}}P(x)\log P(x)
\end{split} 
\end{equation}

The entropy of $X$ is maximal if its probability distribution is uniform, and minimal if the whole population is
concentrated on a specific value $x_c \in \mathcal{X}$, i.e., $P(x_c)=1$ and $P(x)=0$ for $x \neq x_c$. The higher the
entropy, the higher the uncertainty about the value of $X$, and an entropy of zero means complete certainty about
the value of $X$.
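As a small illustration, the following sketch computes the entropy of the empirical distribution of an attribute, with
$P(x)=|m(x)|/|\mathcal{U}|$ as defined above (the function name \texttt{entropy} and the list-based representation of
$\mathcal{U}$ are ours):

```python
import math
from collections import Counter

def entropy(values, base=2):
    """Shannon entropy H(X) of the empirical distribution of `values`,
    using P(x) = |m(x)| / |U| over the observed objects."""
    n = len(values)
    return -sum((c / n) * math.log(c / n, base)
                for c in Counter(values).values())
```

A uniform sample such as \texttt{['a', 'b', 'a', 'b']} yields the maximum entropy of $1$ bit, while a constant sample
such as \texttt{['a', 'a', 'a']} yields an entropy of zero.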

\subsection{Kullback-Leibler Divergence}
\label{sec:kldiv}

The Kullback-Leibler divergence~\citep{Kullback51klDivergence} $D_{KL}(P||Q)$ between the population
distributions $P(X)$ and $Q(X)$ measures the degree of deviation between them:

\begin{equation}
\begin{split}
 D_{KL}(P||Q)&=E_{P(X)}\left[\log\cfrac{P(X)}{Q(X)}\right] \\
 &=\sum_{x \in \mathcal{X}}P(x)\log\left(\cfrac{P(x)}{Q(x)}\right)
\end{split} 
\end{equation}

This divergence measure is a premetric, but not a metric. It is non-negative and becomes zero if $P(x)=Q(x)$,
$\forall x \in \mathcal{X}$, but it satisfies neither the symmetry property nor the triangle inequality.

There are also symmetrized divergence measures based on the Kullback-Leibler divergence, e.g. the $J$-divergence
(Equation~\ref{eq:jDiv}) and the Jensen-Shannon divergence (Equation~\ref{eq:jensenShannon}):

\begin{equation} \label{eq:jDiv}
  D_{J}(P||Q) = D_{KL}(P||Q) + D_{KL}(Q||P)
\end{equation}
\begin{equation} \label{eq:jensenShannon}
  D_{JS}(P||Q) = \cfrac{1}{2}D_{KL}(P||M) + \cfrac{1}{2}D_{KL}(Q||M)
\end{equation}

where $M$ is the average of the two distributions, $M=\cfrac{1}{2}(P+Q)$.

The Jensen-Shannon divergence has the advantage that it always results in a finite value ($0 \leq
D_{JS}(P||Q) \leq 1$ for the base-2 logarithm). Although it still does not satisfy the triangle inequality, its square
root does and is, therefore, a metric. Further details about these divergence measures are presented
in~\citet{17795}, \citet{Vinh:2010:ITM:1953011.1953024}, and~\citet{guiasu1977information}.
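For finite distributions represented as dictionaries mapping values to probabilities, these divergences can be sketched
as follows (an illustration with helper names of our choosing; \texttt{kl} assumes $Q(x)>0$ wherever $P(x)>0$):

```python
import math

def kl(p, q, base=2):
    """Kullback-Leibler divergence D_KL(P||Q); assumes q[x] > 0
    wherever p[x] > 0."""
    return sum(px * math.log(px / q[x], base)
               for x, px in p.items() if px > 0)

def j_div(p, q, base=2):
    """Symmetrized J-divergence: D_KL(P||Q) + D_KL(Q||P)."""
    return kl(p, q, base) + kl(q, p, base)

def js(p, q, base=2):
    """Jensen-Shannon divergence via the mixture M = (P + Q) / 2,
    which is always finite even for disjoint supports."""
    m = {x: 0.5 * (p.get(x, 0) + q.get(x, 0)) for x in set(p) | set(q)}
    return 0.5 * kl(p, m, base) + 0.5 * kl(q, m, base)
```

For two distributions with disjoint supports, e.g. \texttt{\{'a': 1.0\}} and \texttt{\{'b': 1.0\}}, $D_{JS}$ attains
its maximum value of $1$ for the base-2 logarithm, whereas $D_{KL}$ would be infinite.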

\subsection{Mutual Information}

Divergence measures can be used for computing the degree of dependence between two attributes $X$ and $Y$. This
is done by simply measuring the divergence between the observed joint distribution $P(X,Y)$ and the independence
distribution formed by the product of the marginals $P(X)P(Y)$. This measure is called mutual information:

\begin{equation} \label{eq:mutualInformation}
\begin{split}
 MI(X;Y)&=D_{KL}(P(X,Y)||P(X)P(Y)) \\
 &=E_{P(X,Y)}\left[ \log\cfrac{P(X,Y)}{P(X)P(Y)} \right] \\
 &=\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} P(x,y)\log\cfrac{P(x,y)}{P(x)P(y)}
\end{split}
\end{equation}

Completely independent attributes have a mutual information of zero, and the greater the level of dependence, the
greater the mutual information.
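Following Equation~\ref{eq:mutualInformation}, mutual information can be estimated from a sample of observed $(x, y)$
pairs using empirical joint and marginal distributions. The sketch below is illustrative (the function name and the
pair-list representation are ours):

```python
import math
from collections import Counter

def mutual_information(pairs, base=2):
    """MI(X;Y) estimated from a list of observed (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)                  # joint counts of (x, y)
    px = Counter(x for x, _ in pairs)     # marginal counts of X
    py = Counter(y for _, y in pairs)     # marginal counts of Y
    # (c/n) / ((px/n) * (py/n)) simplifies to c * n / (px * py)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]), base)
               for (x, y), c in pxy.items())
```

For pairs drawn from independent attributes, e.g. \texttt{[('a','c'), ('a','d'), ('b','c'), ('b','d')]}, the estimate
is zero; for perfectly dependent pairs such as \texttt{[('a','c'), ('b','d')]} it reaches $1$ bit in this example.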

\section{Semantic Web Applications}
\label{sec:rw-semanticWeb}

The Semantic Web is, according to its official website\footnote{\url{http://www.w3.org/standards/semanticweb/}}, a
technology created by the World Wide Web Consortium (W3C) to enable the so-called ``Web of Data'', allowing computers
to do more useful work and enabling the development of systems that can support trusted interactions over the network.

It consists of a set of technologies and formats which aim to make web resources more readily accessible to automated
processes. The Semantic Web Stack (Figure~\ref{fig:sematicWebLayer}\footnote{``Semantic Web - XML2000'', slide 10.
W3C: \url{http://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html}}) illustrates the Semantic Web architecture,
showing how its standards are organized and how they extend the classical hypertext Web. Its most relevant standards
are the following:

\begin{figure}
\begin{center}
  \includegraphics[width=0.5\linewidth]{./Figures/Semantic-web-stack.png}
\end{center}
\caption{Semantic Web stack}
\label{fig:sematicWebLayer}
\end{figure}

\begin{itemize}
 \item Resource Description Framework (RDF): a framework for creating statements in the form of triples composed of a
subject, a predicate, and an object. It uses URIs to name the predicates as well as the entities, which may appear as
the subject or object of a triple. RDF can be encoded in a variety of formats, such as RDF/XML, N3, Turtle, and
N-Triples.
 \item RDF Schema (RDFS): an RDF vocabulary description language which provides basic elements, known as constructs
(RDFS classes, associated properties, and utility properties), for the description of ontologies (or RDF vocabularies)
intended to structure RDF resources.
 \item Web Ontology Language (OWL): a semantic markup language for publishing and sharing ontologies on the Web. It is
a vocabulary extension of RDF that allows an ontology to be described with a set of axioms which place constraints on
classes and on the types of relationships permitted between them.
 \item SPARQL: an RDF query language which can be used to express queries across diverse data sources. Queries consist
of triple patterns, conjunctions, disjunctions, and optional patterns.
 \item Rule Interchange Format (RIF): a standard for exchanging rules among rule systems. It focuses on exchange rather
than on defining a single one-size-fits-all rule language, since different rule systems may have different
characteristics and necessities.
 \item Semantic Web Rule Language (SWRL): a proposal for a rule language combining sublanguages of OWL with the
Unary/Binary Datalog Rule Markup Language\footnote{\url{http://ruleml.org/}} (RuleML), which specifies different
derivation rules via XML Schema.
\end{itemize}
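To make the triple model and SPARQL-style pattern matching concrete, the following self-contained sketch stores
RDF-like triples as Python tuples and evaluates a single triple pattern. It uses plain Python rather than an RDF
library, and the URIs, the sample data, and the \texttt{match} helper are made up for the example:

```python
# A tiny set of RDF-like (subject, predicate, object) triples;
# the "ex:" names are invented for this illustration.
TRIPLES = {
    ("ex:Berlin", "ex:isCapitalOf", "ex:Germany"),
    ("ex:Berlin", "ex:population", "3500000"),
    ("ex:Paris", "ex:isCapitalOf", "ex:France"),
}

def match(pattern):
    """Return all triples matching a single triple pattern;
    None in a position acts as a variable, as in a SPARQL pattern."""
    s, p, o = pattern
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

Here the pattern with \texttt{None} in the subject and object positions plays the role of the SPARQL query
\texttt{SELECT ?s ?o WHERE \{ ?s ex:isCapitalOf ?o \}}, binding the variables \texttt{?s} and \texttt{?o} against the
stored triples.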

All these technologies are extremely important to this thesis: all the knowledge bases we use are stored as RDF
triples, we access them with SPARQL queries, we exploit OWL classes and properties to optimize our learning algorithm,
and we can express the learned Datalog rules in SWRL.

%ruleml.org/talks/RuleML-Family-PPSWR06-talk-up.ppt

%http://ruleml.org/1.0/exa/

\section{Linked Open Data Applications}
\label{sec:rw-lod}

The Semantic Web requires not only access to reachable and manageable data in a standard format, but also
relationships among data from different sources, which enables the creation of a huge collection of interrelated
datasets on the Web, also referred to as Linked Data.

\begin{figure}[h!]
\begin{center}
  \includegraphics[width=1\linewidth]{./Figures/lod-datasets_2011-09-19.png}
\end{center}
\caption{Linking Open Data cloud diagram, by Richard Cyganiak and Anja Jentzsch. \url{http://lod-cloud.net}}
\label{fig:lod}
\end{figure}

Linked Data is a fundamental part of the Semantic Web essence, facilitating large-scale integration of, and reasoning
over, data on the Web. A typical example of a Linked Data collection is
DBpedia\footnote{\url{http://www.dbpedia.org/}}, which essentially extracts information from Wikipedia and
makes it available in RDF, also incorporating links to other datasets on the Web, such as
Geonames\footnote{\url{http://www.geonames.org/}},
YAGO\footnote{\url{http://www.mpi-inf.mpg.de/yago-naga/yago/}}, 
LinkedMDB\footnote{\url{http://www.linkedmdb.org/}} and 
USCensus\footnote{\url{http://www.rdfabout.com/demo/census/}}.

This sort of linkage allows applications to exploit the extra information that these datasets provide by integrating
facts from various sources. Figure~\ref{fig:lod} shows the latest version of the Linked Open Data cloud
diagram, and Figure~\ref{fig:lodZoom} zooms in on and highlights some of the datasets used in this thesis.

\begin{figure}[h!]
\begin{center}
  \includegraphics[width=1\linewidth]{./Figures/lod-zoom.png}
\end{center}
\caption{Zoom of Figure~\ref{fig:lod} highlighting datasets relevant to this thesis}
\label{fig:lodZoom}
\end{figure}
