\section{Direct Matching}  

\subsection{From Semantic to Entropy-Based Relations} 


% \begin{enumerate}
 %\item Here we should show that all decisions are made based on data complexity measure. For instance, we just combine keys if the entropy of each key individually is not maximal. 
%\item We introduce the notion of Key=IFP
%\item We introduce the notion of direct matching
%\item We introduce the notion of class-based matching (SERIMI itself)
%\item We introduce a kernel model that combines the direct matching and class-based matching
%\end{enumerate}

OWL semantics, such as \texttt{owl:InverseFunctionalProperty} (\textit{owl:IFP} for short), are the main semantic component used to define a Same-as relation between two instances. For example, the semantics of \textit{owl:IFP} enforce that the object of such a predicate uniquely identifies its subject; that is, if two distinct subjects have the same value for an \textit{owl:IFP} predicate, then the subjects are the same. This inference is the most common mechanism used to find possible matches among a set of distinct subjects, and it is the basis of most instance matching systems. In this section, we present our approach to determining the Same-as relation between two instances, which approximates the semantics of \textit{owl:IFP} using only statistical properties of the data. This approach will later be used as the core of both direct matching and class-based matching.

\subsection{Semantic Relations}

We now formalize the notions of the Same-as, IFP, and CTP relations.

\begin{definition} The Same-as Relation - Let $U$ be a set of URIs. The same-as relation, denoted by $\mathbb{S}$, is defined as the minimal reflexive, symmetric relation on $U$ satisfying: (1) $\forall s \in U, \langle s,s \rangle \in \mathbb{S}$; (2) for $s, o \in U$, if there exists a triple $\langle s, owl:sameAs, o\rangle$, then $\langle s,o \rangle \in \mathbb{S}$ and $\langle o,s \rangle \in \mathbb{S}$.
\end{definition} 

\begin{definition} The IFP relation - Let $U$ be a set of URIs. The IFP relation, denoted by $\mathbb{I}$, is defined to be the minimal reflexive, symmetric relation on $U$, where (1) $\forall s \in U, \langle s,s \rangle \in \mathbb{I}$; (2) $\forall s1,s2 \in U$, if there exist an IFP $p$ and two triples $\langle s1,p,o \rangle$, $\langle s2,p,o \rangle$ in $G$, then $\langle s1,s2 \rangle \in \mathbb{I}$ and $\langle s2,s1 \rangle \in \mathbb{I}$.
\end{definition}

Approaches that use the IFP relation to define the Same-as relation~\cite{DBLP:conf/aswc/NikolovUMR08} usually parse ontologies to find predicates whose \textit{rdf:type} is explicitly defined as \textit{owl:IFP}. A disadvantage of this approach is that this information can only be obtained when the ontology that defines the predicate is dereferenceable (i.e.\ available online for inspection). Another limitation of inference based on \textit{owl:IFP} is that in some cases no predicate is defined as \textit{owl:IFP} in the data ontology, yet the instances may still have corresponding matches in the data. 

Moreover, the strict constraint imposed by the \textit{owl:IFP} semantics can only be guaranteed over a controlled source. In the multi-source environment of the semantic web, this property is often violated. For instance, a Geonames dataset may have a triple $\langle s1, p, "Rose" \rangle$ and a DBpedia dataset may have a triple $\langle s2, p, "Rose" \rangle$, where $p$ is an \textit{owl:IFP} but $s1$ denotes a city and $s2$ a flower. In this case, $s1$ and $s2$ are not the same, even though they form an IFP relation. To overcome this particular problem, a \textit{class type relation (CTP)}, defined by \textit{rdf:type}, can be used to ensure the Same-as relation across multiple data sources. 

%\begin{definition} The FP relation - Let U be a set of URIs. The FP relation, denoted by $\mathbb{F}$, is defined to be the minimal reflexive, symmetric relation on U, where (1) $\forall o \in U, \langle o,o \rangle \in \mathbb{F}$;(2) $\forall o1,o2 \in U$, if there are an FP $p$ and two triples $\langle s,p,o1 \rangle$, $\langle s,p,o2 \rangle$ ,then $\langle o1,o2 \rangle \in \mathbb{F}$ and $\langle o2,o1 \rangle \in \mathbb{F}$.
%\end{definition}

\begin{definition} The CTP relation - Let $U$ be a set of URIs. The CTP relation, denoted by $\mathbb{C}$, is defined to be the minimal reflexive, symmetric relation on $U$, where (1) $\forall o \in U, \langle o,o \rangle \in \mathbb{C}$; (2) $\forall s1,s2 \in U$, if there are two triples $\langle s1,$ rdf:type $,a \rangle$, $\langle s2,$ rdf:type $,b \rangle$ and $\langle a,b \rangle \in \mathbb{S}$, then $\langle s1,s2 \rangle \in \mathbb{C}$ and $\langle s2,s1 \rangle \in \mathbb{C}$.
\end{definition}

Combining these relations, we define:

\begin{definition} Match - In a multi-source scenario, given two instances $s1 \in S$ and $s2 \in T$, $\langle s1,s2 \rangle \in \mathbb{S}$ if $\langle s1,s2 \rangle \in \mathbb{I}$ and $\langle s1,s2 \rangle \in \mathbb{C}$.
\end{definition}

To overcome all these limitations, we propose to use the entropy of the predicates to infer Same-as relations in real-world scenarios where data is heterogeneous, imperfect, noisy, and multi-sourced. The model that we describe next is agnostic to the semantic definitions in data ontologies but captures the same meta-properties of the relations defined above.

\subsection{Entropy-Based Relations}

Given a predicate $p$, its set of object values $O_p = \{o_1,\ldots,o_i,\ldots,o_n\}$, and $Pr(o_i)$, the probability of observing $o_i$ as the object of the predicate $p$ in a graph $G$, we define the entropy $H(p)$ of the predicate $p$ as:

\begin{equation}
H(p) = - \sum_{i=1}^{n} Pr(o_i)\log_2 Pr(o_i).
\end{equation}

The entropy\footnote{Notice that the representation of $o$ can be a set of bigrams or trigrams, i.e., the result of a function of its original value.} is maximal when $\forall o \in O_p, Pr(o)=\frac{1}{|O_p|}$. We then define:

\begin{definition} Entropy-Based IFP - Given a graph $G$, a predicate $p$ is an IFP iff $H(p)$ is maximal.
\end{definition}

The entropy is zero when $\forall o \in O_p, Pr(o)=1$, i.e., the predicate takes a single object value regardless of the subject. We can say that such a predicate is a class type property (CTP). 

\begin{definition} Entropy-Based CTP - Given a graph $G$, a predicate $p$ is a CTP iff $H(p) = 0$.
\end{definition}
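To make the two definitions concrete, the following sketch computes $H(p)$ over a toy graph and tests both conditions. The plain-tuple triple representation and the helper names are illustrative assumptions, not part of the approach itself.

```python
from collections import Counter
from math import log2, isclose

def entropy(graph, predicate):
    """Shannon entropy H(p) over the object values of `predicate`.

    `graph` is modeled as an iterable of (subject, predicate, object)
    triples; this plain-tuple representation is an illustrative assumption.
    """
    objects = [o for _, p, o in graph if p == predicate]
    n = len(objects)
    counts = Counter(objects)
    # Pr(o_i) = count(o_i) / n, summed over distinct object values
    return -sum((c / n) * log2(c / n) for c in counts.values())

def is_ifp(graph, predicate):
    """Entropy-based IFP: H(p) is maximal, i.e. log2(n) for n observations."""
    n = sum(1 for _, p, _ in graph if p == predicate)
    return isclose(entropy(graph, predicate), log2(n))

def is_ctp(graph, predicate):
    """Entropy-based CTP: H(p) = 0, i.e. one object value for all subjects."""
    return isclose(entropy(graph, predicate), 0.0, abs_tol=1e-12)
```

On a graph where every subject has a distinct name but the same type, the name predicate satisfies the IFP condition and the type predicate the CTP condition.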

The main benefit of defining an IFP in this fashion is that we do not depend on formal ontology definitions to detect an IFP. This notion of IFP depends on evidence from the distribution of the predicate values in the data. In the context of the semantic web, it is more realistic because anything can be expressed in the data, so any formal definition can be violated. A second benefit is that it outlines a data complexity measure, i.e., we can determine beforehand how difficult it is to match two datasets based on the entropy of their predicates. For instance, data containing no IFP is harder to match because its values are ambiguous, i.e., many instances may share the same name. By knowing this in advance, we can select a proper approach to guarantee the accuracy of the matches in such ambiguous sources.

Besides these benefits, we can also define when two instances match even when no IFP is found in the data. For example, predicates with high (but not maximal) entropy can still be used as a pseudo-IFP (PIFP).

\begin{definition}Pseudo-IFP (PIFP) - Given a graph $G$, a predicate $p$ is a PIFP iff $H(p) \geq H_{max}(p) - \omega$, where $\omega$ is a threshold.
\end{definition}

By combining several PIFP we can obtain a set of predicates that uniquely identifies an instance. Such a combination of predicates plays the role of an IFP. For example, in Table 1, the value of \textit{rdfs:label} cannot uniquely identify an instance, but the combination of \textit{rdfs:label} and \textit{geo:lat} is more discriminative and their values uniquely identify an instance.


\begin{definition} Combined IFP (CIFP) - Given a set $P' \subset P$, where $P$ is the set of all predicates in $G$, the set $P'$ is a CIFP if the entropy $H_c(P')$ is maximal:

\begin{equation}
H_c(P') = - \sum_{i=1}^{n} Pr(r_i)\log_2 Pr(r_i).
\end{equation}

where $\{r_1,\ldots,r_i,\ldots,r_n\}$ is the set of values of $P'$, with $r_i=\{\langle p, o \rangle \mid \exists s_i, \forall p \in P', \langle s_i, p, o \rangle \in G \}$, and $Pr(r_i)$ is the probability of observing $r_i$ as the set of tuples for an instance $s_i$ in the graph $G$.

\end{definition}
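Following the \textit{rdfs:label}/\textit{geo:lat} example above, a sketch of $H_c(P')$ can collect, per subject, the set of $\langle p, o \rangle$ pairs restricted to $P'$ and compute the entropy over those records. The plain-tuple graph representation is an illustrative assumption.

```python
from collections import Counter, defaultdict
from math import log2

def combined_entropy(graph, predicates):
    """H_c(P'): entropy over the records r_i, where each record collects the
    (predicate, object) pairs that a subject s_i takes for the set P'.
    `graph` is an iterable of (s, p, o) tuples (illustrative assumption)."""
    records = defaultdict(set)
    for s, p, o in graph:
        if p in predicates:
            records[s].add((p, o))
    tuples = [frozenset(r) for r in records.values()]
    n = len(tuples)
    counts = Counter(tuples)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

With two subjects sharing a label but differing in latitude, the label alone has zero combined entropy while the label/latitude pair reaches the maximal $\log_2 2 = 1$.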

In this work, the entropy measures were normalized to $(0,1)$, where 0 represents the maximal entropy. 

In the next section, we explain how we use CIFP and CTP to infer Same-as relations.


% \begin{enumerate}
% \item  here we introduce the direct matching and the class-based matching
% \item we introduce the features
% \item we introduce the similarity function
% \item we introduce the direct matching problem.
% \item we introduce the class-based method itself
%  detecting outliers.
%\end{enumerate}

In this section, we describe our approach for instance matching, which combines the direct matching problem and the class-based matching problem. To avoid the $|S|\times|T|$ instance comparisons, we start our process by selecting a set of possible candidate matches $C(s)$ for each instance $s \in S$. After the candidates are obtained, we use both direct matching and class-based matching to define the Same-as relation between instances. The full process is described in detail next.

\subsection{Finding PIFP} 
In order to generate candidate sets, the first step is to find the $PIFP$ in the set of source instances $S$, denoted $PIFP_s$, and in the set of target instances $T$, denoted $PIFP_t$. These predicates are later used to select candidates. We look for predicates with the lowest normalized entropy (i.e., closest to the maximal entropy, which is zero after normalization). Alg. \ref{alg:pifp} describes this process, where predicates with entropy lower than the average of all entropies are selected as PIFP. The use of the average as a threshold was chosen empirically in this work.

\begin{algorithm}
\caption{FindPIFP(G). Find $PIFP$ in $G$.}

\begin{algorithmic}
\scriptsize\tt
\STATE $PIFP \leftarrow \emptyset$. 
\STATE $P \leftarrow$ the set of predicates in $G$. 
\FORALL{$p \in P$}
\STATE scores[p] $\leftarrow H(p)$
\ENDFOR
\STATE average $\leftarrow$ average(scores.values)
\FORALL{$p \in P$}
\IF{scores[p] $\leq$ average}
\STATE $PIFP$.add(p)
\ENDIF
\ENDFOR
\RETURN $PIFP$
\end{algorithmic}
\label{alg:pifp}
\end{algorithm}
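A minimal Python rendering of Alg. \ref{alg:pifp}, assuming the graph is a list of triples; the rescaling $1 - H(p)/\log_2 n$, which maps the maximal entropy to 0, is an assumption about the normalization used in this work.

```python
from collections import Counter
from math import log2

def normalized_entropy(graph, predicate):
    """Entropy of `predicate` rescaled so that 0 is the maximal entropy and
    1 is zero entropy; the rescaling 1 - H(p)/log2(n) is an assumption."""
    objects = [o for _, p, o in graph if p == predicate]
    n = len(objects)
    if n <= 1:
        return 1.0  # a single observation carries no discriminating power
    counts = Counter(objects)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return 1.0 - h / log2(n)

def find_pifp(graph):
    """FindPIFP: keep predicates whose normalized entropy is at or below the
    average over all predicates (lower = closer to the maximal entropy)."""
    scores = {p: normalized_entropy(graph, p) for _, p, _ in graph}
    average = sum(scores.values()) / len(scores)
    return {p for p, h in scores.items() if h <= average}
```

On a toy graph where "name" is fully discriminating and "type" is constant, only "name" survives the average threshold.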

\subsection{Mapping PIFP} 
To map PIFP, we borrow ideas from instance-based schema matching, which derives predicate mappings based on the similarity between their values~\cite{DBLP:conf/fskd/FengHQ09,DBLP:conf/iceis/LemeCBF09}. Here, we map all PIFP that share some bigrams in their literal values. Additionally, we exploit the entropy of the predicates as computed before. Intuitively, this entropy score can be seen as another similarity signal, because a large difference between the entropies of two predicates also indicates that they represent different information. 

Alg. \ref{alg:alignment} shows the process of aligning predicates in $PIFP_s$ and $PIFP_t$, computed separately for $G_s$ and $G_t$, respectively, as discussed before. The result $PIFP_{st}$ consists of pairs of predicates, one predicate in $PIFP_s$ and its corresponding predicate in $PIFP_t$. The algorithm starts by building a $|PIFP_s|\times|PIFP_t|$ matrix $V$, with every cell representing a pair of predicates $p_s \in PIFP_s$ and $p_t \in PIFP_t$ through two features $sim$ and $d$. The set similarity score $sim$ is defined over the sets of bigrams $Bi_{s}$ and $Bi_{t}$ extracted from all literal values of $p_s$ and $p_t$, respectively, as $\frac{|Bi_{s} \cap Bi_{t}|}{min(|Bi_{s}|,|Bi_{t}|)}$; $d$ is the absolute difference between $H(p_s)$ and $H(p_t)$. 

These features form a two-dimensional feature space in which a predicate pair can be represented as a point. An agglomerative hierarchical clustering algorithm~\cite{DBLP:books/ph/JainD88} is then applied to all the points corresponding to cells in $V$. In particular, centroid linkage with the four centroids $[0,0], [1,0], [0,1]$ and $[1,1]$ is used to form four clusters. The resulting cluster closest to $[0,0]$ contains points with higher bigram overlap and smaller difference in entropy, compared to points in the other three clusters. Only predicate pairs represented by points in this cluster are considered as alignments and used subsequently for finding the candidate instances. 



\begin{algorithm}

\caption{MapPIFP($PIFP_s$, $PIFP_t$). Maps pairs of predicates from $PIFP_s$ in $G_s$ and $PIFP_t$ in $G_t$.}
\begin{algorithmic}
\scriptsize\tt
\STATE Create a $|PIFP_s|\times|PIFP_t|$ feature matrix, $V$; every cell $V_{st}$ is used to store two features $[sim,d]$, where $sim$ is the similarity score and $d$ the discriminability score between the attributes $p_s$ and $p_t$
\FORALL{$p_s \in PIFP_s$}
\FORALL{$p_t \in PIFP_t$}
\STATE $V_{st} \leftarrow$ [setsim(bigrams($p_s$), bigrams($p_t$)), Abs(H($p_s$) - H($p_t$))]
\ENDFOR
\ENDFOR
\STATE clusters $\leftarrow$ clustering($V$, centroids([0,0],[0,1],[1,0],[1,1]))
\STATE $PIFP_{st}$ $\leftarrow$ clusters.closestTo([0,0])
\RETURN $PIFP_{st}$
\end{algorithmic}
\label{alg:alignment}
\end{algorithm}
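The alignment step can be approximated as follows. To stay consistent with retaining the cluster nearest $[0,0]$, this sketch stores the bigram overlap as a dissimilarity $1 - sim$, and it replaces agglomerative clustering with a one-shot nearest-centroid assignment; both choices, and the caller-supplied `bigrams` and `entropy` callables, are simplifying assumptions.

```python
from math import hypot

def map_pifp(pifp_s, pifp_t, bigrams, entropy):
    """Sketch of MapPIFP: keep predicate pairs whose feature point lands in
    the cluster nearest [0,0]. `bigrams(p)` returns the bigram set of p's
    literal values and `entropy(p)` its normalized entropy (both assumed
    supplied by the caller)."""
    centroids = [(0, 0), (0, 1), (1, 0), (1, 1)]
    aligned = []
    for p_s in pifp_s:
        for p_t in pifp_t:
            bs, bt = bigrams(p_s), bigrams(p_t)
            # dissimilarity: 0 when the bigram sets fully overlap
            dis = 1.0 - len(bs & bt) / min(len(bs), len(bt))
            d = abs(entropy(p_s) - entropy(p_t))
            nearest = min(centroids, key=lambda c: hypot(dis - c[0], d - c[1]))
            if nearest == (0, 0):  # the retained cluster
                aligned.append((p_s, p_t))
    return aligned
```

A pair with identical bigrams and near-equal entropies maps to a point near $[0,0]$ and is kept; a pair with disjoint bigrams and a large entropy gap drifts toward $[1,1]$ and is dropped.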
 
\subsection{Finding Candidates} 

To find the candidates, for each instance $s \in S$, we compute a set $C(s)$ containing all target instances $t$ for which $sim(Obj(s,p_s), Obj(t,p_t)) > \alpha$, where $\langle p_s, p_t \rangle$ is a pair in $PIFP_{st}$. As the similarity function $sim$ we use the \textit{Jaccard} set similarity between the sets of bigrams of the values of $p_s$ and $p_t$, with the threshold $\alpha$ set to 0.6. 
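A sketch of the candidate selection, assuming for illustration that each instance exposes a single literal value for the aligned predicate pair; the helper names are hypothetical.

```python
def bigrams(text):
    """Character bigrams of a string (hypothetical helper)."""
    return {text[i:i + 2] for i in range(len(text) - 1)}

def jaccard(a, b):
    """Jaccard set similarity |a & b| / |a | b|."""
    return len(a & b) / len(a | b) if a or b else 0.0

def candidates(source_values, target_values, alpha=0.6):
    """C(s): target instances whose value for the aligned PIFP pair has
    Jaccard bigram similarity above alpha with the source value."""
    return {s: {t for t, vt in target_values.items()
                if jaccard(bigrams(vs), bigrams(vt)) > alpha}
            for s, vs in source_values.items()}
```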

\subsection{Approximated Direct Matching} 

Once the candidate set $C(s) \in C(S)$ is determined for each instance $s \in S$, we use the direct matching approach to find Same-as relations between $s$ and $t \in C(s)$. The exact computation of the source CIFP (or target CIFP) is quite expensive: in the worst case, it requires computing the entropy for $2^N$ predicate combinations, where $N$ is the number of predicates in $G_s$ (or $G_t$). To avoid this combinatorial problem we simply consider the union of the PIFP (computed by Alg. \ref{alg:pifp}) as the CIFP. Then, given a source instance $s$ and a target instance $t$ and their respective $CIFP_s$ and $CIFP_t$, we compute $Sim(s,t)$ as in Eq. \ref{eq:sim}. We use Jaccard as the similarity function, comparing the sets of bigrams $W_s$ and $W_t$ of the values of each predicate pair $\langle p_s, p_t \rangle \in CIFP_{st}$. An instance $t$ is then considered a correct match for $s$ if $Sim(s,t) \geq \delta$ or its rank is within the top-K among all instances in $C(s)$. In this work, we also combine the $Sim(s,t)$ score with the class-based matching score, described next, to decide whether $s$ and $t$ form a Same-as relation.
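A sketch of the direct-matching score over the aligned CIFP pairs. Averaging the per-pair Jaccard scores is an assumption made here for concreteness, and the value maps are hypothetical.

```python
def bigrams(text):
    """Character bigrams of a string (hypothetical helper)."""
    return {text[i:i + 2] for i in range(len(text) - 1)}

def jaccard(a, b):
    """Jaccard set similarity |a & b| / |a | b|."""
    return len(a & b) / len(a | b) if a or b else 0.0

def direct_sim(s_values, t_values, cifp_pairs):
    """Sim(s,t): Jaccard bigram similarity per aligned CIFP predicate pair,
    averaged over all pairs (the averaging is an illustrative assumption).
    `s_values[p]` / `t_values[p]` give the literal value of predicate p."""
    scores = [jaccard(bigrams(s_values[ps]), bigrams(t_values[pt]))
              for ps, pt in cifp_pairs]
    return sum(scores) / len(scores)

def is_direct_match(score, delta=0.8):
    """Accept t as a match for s when Sim(s,t) >= delta; the top-K fallback
    described in the text is omitted in this sketch."""
    return score >= delta
```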

\subsection{Combining Direct and Class-based Matching}
We evaluate the class-based algorithm combined with the direct matching algorithm. In this case, each score in the candidate sets is the product of the individual scores produced by each method.
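As a minimal sketch, the combination described above can be expressed as a per-candidate product of the two scores; the score dictionaries and the zero default for candidates missing from the class-based scoring are illustrative assumptions.

```python
def combine_scores(direct_scores, class_scores):
    """Final score per candidate t: product of the direct-matching score and
    the class-based score. Candidates absent from the class-based scoring
    default to 0 (an assumption)."""
    return {t: d * class_scores.get(t, 0.0)
            for t, d in direct_scores.items()}
```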