
\section{Overview of the Approach}
\label{sec:overview}
In this section, we present an overview of SERIMI, our solution for instance matching. 


\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{overview.pdf}
\caption{The instance matching process in SERIMI.} 
\label{fig:overview}
\end{figure} 


The process of instance matching performed by SERIMI is illustrated in Fig.\ \ref{fig:overview}. SERIMI focuses on the problem of \emph{instance matching across heterogeneous datasets}. 
%The inputs thus represent different datasets and the goal is to find matches between datasets. 
In particular, the inputs are conceived to be partitioned into two datasets, the source $S$ and the target $T$. For every instance $s \in S$, the goal is to find matching instances $t \in T$, i.e., instances such that $s$ and $t$ refer to the same real-world object. This matching is performed in two main steps: candidate selection and match refinement. 

\textbf{Candidate Selection.} For each $s \in S$, we first perform a low-cost candidate selection step to obtain a candidate set $C(s) \subseteq T$. The set of all candidate sets is denoted as $C(S)=\{C(s) \mid s \in S \}$, and the union of all candidate instances is denoted as $C=\{t \mid \exists s \in S : t \in C(s)\}$. This step reduces the number of comparisons needed to find matches between the source and the target, i.e., from a maximum of $|S| \times |T|$ comparisons to $|S| \times |C|$.
%, where $C(s)^{max}$ is the set with the largest number of candidates among all candidate sets in $C(S)$. 

Existing, so-called blocking techniques~\cite{hernandez_merge/purge_1995,mccallum_efficient_2000,papadakis_efficient_2011} can be used to quickly select candidates. Typically, a predicate (or a combination of predicates) that is useful in distinguishing instances is chosen, and its values are used as blocking keys. In this setting of cross-dataset matching, a predicate in the source is chosen (e.g.\ \verb+rdfs:label+) and its values (e.g.\ ``San Francisco'') are used to find target candidate instances that have similar values in their predicates.
%In particular, we select attributes with entropy higher than a threshold. Given the attribute $a$ and its literal values $O(a) = \{o | (s,a,o) \in G\}$, let $p$ be the probability mass function of $a$, then the entropy of $a$ is $H(a)=-\sum_{o \in O(a)} p(o)log_2p(o)$.
Using the current example, the candidate matches for $S =$ \{\verb+nyt:2223+, \verb+nyt:5962+, \verb+nyt:5555+\} would be $C(\verb+nyt:2223+) =$ \{\verb+db:San_Francisco+\}, $C(\verb+nyt:5962+) =$ \{\verb+db:Belmont_+\verb+California+, \verb+db:Belmont_+\verb+France+\} and $C(\verb+nyt:5555+) =$ \{\verb+db:San_Jose_+\verb+California+, \verb+db:San_Jose_+\verb+Costa_Rica+\}. These candidates were selected based on high (lexical) similarity with the value of the \verb+rdfs:label+ predicate of the source instances.

% in all sets the candidates share the same value of the predicate  \verb+rdfs:label+  with the source instance. 



To generate candidates in this work, we use simple boolean matching: we construct boolean queries using tokens extracted from the labels of the source instances. 
Standard pre-processing is applied to lowercase the tokens and to remove stop words.  
%, where the tokens of the source labels where  keywords in the queries. 
These queries retrieve candidates whose values share at least one token with the values of the corresponding source instance. This method is primarily geared towards quickly finding all matches, i.e., high recall, but may produce many incorrect candidates. Higher precision can be achieved using other techniques known in the literature~\cite{DBLP:journals/pvldb/ArasuCK09}. 
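A minimal sketch of this token-based candidate selection, under the assumption of small in-memory datasets (the stop-word list, instance URIs, and labels are illustrative; a real implementation would query an inverted index instead of scanning the target):

```python
# Illustrative stop-word list; a real system would use a fuller one.
STOP_WORDS = {"the", "of", "de"}

def tokens(label):
    """Standard pre-processing: lowercase the label and drop stop words."""
    return {t for t in label.lower().split() if t not in STOP_WORDS}

def select_candidates(source_label, target_labels):
    """Boolean matching: keep targets sharing at least one token with the source."""
    query = tokens(source_label)
    return {uri for uri, label in target_labels.items() if query & tokens(label)}

# Toy target dataset keyed by instance URI (contents are illustrative).
target = {
    "db:San_Jose_California": "San Jose California",
    "db:San_Jose_Costa_Rica": "San Jose Costa Rica",
    "db:San_Francisco": "San Francisco",
}
candidates = select_candidates("San Jose", target)
```

Note that \verb+db:San_Francisco+ is also retrieved (it shares the token ``san''), reflecting the recall-oriented nature of this step.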
 


\textbf{Direct Matching.} After the candidates have been determined, a more refined matching step is performed to find the correct matches, $M(s) \subseteq C(s)$. For this, state-of-the-art approaches that perform more complex \textit{direct matching} can be applied. Usually, instead of a simple blocking key, they use a combination of weighted similarity functions defined over several predicate values \cite{DBLP:journals/pvldb/WangLYF11, DBLP:journals/pvldb/SuchanekAS11}. Precisely, in direct matching, two given instances $s$ and $t$ are considered a match when their similarity, $sim(s,t)$, exceeds a threshold $\delta$. Typically, $sim(s,t)$ is captured by an \emph{instance matching scheme}, which is a weighted combination of similarity functions (Edit Distance, Jaccard, etc.) defined over the predicate values of $s$ and $t$ \cite{DBLP:journals/pvldb/WangLYF11,DBLP:journals/pvldb/SuchanekAS11}:  

\begin{equation}
sim(s,t) = \sum_{p \in P} {w_p\cdot sim(O(s,p,S), O(t,p,T))}  > \delta
\end{equation}
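The decision rule above can be sketched as follows, taking Jaccard over token sets as the per-predicate similarity; the predicates, values, weights, and threshold in the example are illustrative assumptions:

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a or b else 0.0

def direct_match(s_vals, t_vals, weights, delta):
    """Weighted sum of per-predicate similarities, thresholded at delta."""
    score = sum(w * jaccard(s_vals.get(p, set()), t_vals.get(p, set()))
                for p, w in weights.items())
    return score > delta

# Illustrative predicate values for a source and a target instance.
s = {"rdfs:label": {"san", "francisco"}, "db:incountry": {"usa"}}
t = {"rdfs:label": {"san", "francisco", "city"}, "db:incountry": {"usa"}}
weights = {"rdfs:label": 0.7, "db:incountry": 0.3}
matched = direct_match(s, t, weights, delta=0.5)
```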


%Then, whether this match computed by the algorithm is indeed correct or not, i.e.\ refer to the same real-world entity or not, is verified by the ground truth.

\textbf{Limitations.} The above scheme assumes that $s$ and $t$ share predicates $p$ based on which they can be directly compared (e.g.\ \verb+rdfs:label+, \verb+db:incountry+). In the heterogeneous setting, $S$ and $T$ may exhibit differences in their schemas. Instead of assuming a shared $p$, we can more generally define the instance matching problem in this setting based on the notion of comparable predicates $\langle p_s,p_t\rangle$: $p_s$ is a predicate in $S$ whose values can be compared with those of $p_t$, a predicate in $T$. 

For example, the instance \verb+nyt:4232+ does not share any predicate with the target instances, but we can assume that the predicate \verb+nyt:prefLabel+ ($p_s$) is comparable to the predicate \verb+rdfs:label+ ($p_t$) because they have a similar range of values. Solutions that specifically target this setting of cross-dataset matching employ automatic schema matching or manually determined pairs of comparable predicates \cite{DBLP:journals/pvldb/SuchanekAS11,hu_bootstrapping_2011,Song:2011:AGD:2063016.2063058}.  
%Predicates are comparable when they represent the same concept. More generally, they are comparable when matches in their values represent useful evidences for instance matching (thus, finding comparable predicates is a more general problem than schema matching, where the goal is to find predicates representing the same concept). \todo{maybe use this as motivation for our solution to the mapping problem. I DID NOT UNDERSTAND THIS COMMENT} 
Let $P_{st}$ be the set of all comparable predicates. We define the instance matching scheme for this setting as follows: 

\begin{equation}
\footnotesize
sim(s,t) = \sum_{\langle p_s,p_t \rangle \in P_{st}} {w_{\langle p_s,p_t \rangle} sim(O(s,p_s,S), O(t,p_t,T))} > \delta
\label{eq:sim}
\end{equation} 
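The generalized scheme above differs from the homogeneous case only in that each dataset keeps its own predicates, linked through weighted pairs. A sketch, with an illustrative predicate pair, weight, and token-set Jaccard similarity:

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a or b else 0.0

def cross_schema_match(s_vals, t_vals, weighted_pairs, delta):
    """Sum weighted similarities over comparable predicate pairs <p_s, p_t>,
    where source and target keep their own schemas."""
    score = sum(w * jaccard(s_vals.get(ps, set()), t_vals.get(pt, set()))
                for (ps, pt), w in weighted_pairs.items())
    return score > delta

s = {"nyt:prefLabel": {"san", "francisco"}}   # source-side predicate
t = {"rdfs:label": {"san", "francisco"}}      # comparable target predicate
pairs = {("nyt:prefLabel", "rdfs:label"): 1.0}
ok = cross_schema_match(s, t, pairs, delta=0.8)
```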

%The direct matching paradigm has proven to be useful in the homogeneous setting where instances share some common predicates $p$. In the heterogeneous setting, we showed above that this paradigm would require some pairs of comparable predicates. 
Since the direct overlap at the level of predicates (or values) between instances may be too small to perform matching in the heterogeneous setting, we propose class-based matching. 

\textbf{SERIMI.} Class-based matching can be applied in combination with direct matching, on top of the candidate selection step, as illustrated in Fig.\ \ref{fig:overview}. Candidate selection yields a set of candidates $C(S)$, which is refined by a module that combines class-based and direct matching to obtain $M(S)= \{M(s) \mid s \in S\}$, where $M(s) \subseteq C(s)$.  

%Considering our example, the set $M(S)$ for those candidate sets would be: $M(S)=$\{$M$(\verb+nyt:2223+), $M$(\verb+nyt:5962+), $M$(\verb+nyt:5555+)\}; with   $M$(\verb+nyt:2223+) = \{\verb+db:San_Francisco+\}, 
%  $M$(\verb+nyt:5962+) = \{\verb+db:Belmont_+\verb+California+\}  and 
  %$M$(\verb+nyt:555+\verb+5+) = \{\verb+db:San+  \verb+_Jose_California+\}.
 

While this paper focuses on class-based matching, we also propose a complete instance matching pipeline called SERIMI. Existing state-of-the-art solutions are adopted for the candidate selection and direct matching components of SERIMI. Candidate sets $C(s) \in C(S)$ are determined for each instance $s \in S$ using a predicate value of $s$ as key. The predicate is selected automatically based on the notion of coverage and discriminative power of predicates, as also employed by \cite{Song:2011:AGD:2063016.2063058}. Then, for direct matching, we use simple schema matching to compute the comparable predicates $P_{st}$. The matching between a source instance $s$ and a target instance $t$ is then performed using the values of the predicates in $P_{st}$. As $sim(s,t)$, we use Jaccard similarity. 
%These predicates We use Jaccard as similarity function where we compared the set of bigrams $W_s$ and $W_t$ of the values of the predicate pair $\langle p_s, p_t \rangle\in P_{st}$. 
The main difference to existing works lies in the selection of the threshold: 
%Then, an instance $t$ is considered as a correct match for $s$ if $sim(s,t) \geq \delta$ 
for this, we use the same method that we propose for class-based matching. 
%This statistics-based method be discussed in detail. 
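The Jaccard similarity used for direct matching in SERIMI can be sketched as follows, assuming it is computed over character-bigram sets of the predicate values (the bigram granularity and the example strings are assumptions for illustration):

```python
def bigrams(text):
    """Character bigrams of a lowercased string (assumed granularity)."""
    t = text.lower()
    return {t[i:i + 2] for i in range(len(t) - 1)}

def jaccard_sim(s_value, t_value):
    """Jaccard similarity between the bigram sets of two predicate values."""
    a, b = bigrams(s_value), bigrams(t_value)
    return len(a & b) / len(a | b) if a or b else 0.0

# Illustrative comparison of a source and a target label.
score = jaccard_sim("San Francisco", "San Francisco (Calif)")
```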

%or its score $sim(s,t)$ is the top-1 among all instances $ t \in C(s)$ (called \textit{TOP-1 approach}). The Top-1 approach makes sense for those cases in the heterogeneous scenario where datasets are duplicate-free, i.e., when the one-to-one mapping between source and target exists. 

We observe in the experiments that this simple combination of direct and class-based matching produces good matching quality. In SERIMI, the direct and class-based matching components are treated as black boxes that yield two scores, which are considered independent. SERIMI multiplies and normalizes these scores to obtain a value in $[0,1]$. 
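A sketch of this combination step; the per-candidate scores and the use of max-normalization are illustrative assumptions (the text only specifies that the scores are multiplied and normalized into $[0,1]$):

```python
def combine_scores(direct, class_based):
    """Treat the two matchers as black boxes yielding independent scores:
    multiply them per candidate, then normalize by the maximum product so
    values fall in [0, 1]. (Max-normalization is an assumption.)"""
    raw = {t: direct[t] * class_based[t] for t in direct}
    top = max(raw.values(), default=0.0)
    return {t: (v / top if top > 0 else 0.0) for t, v in raw.items()}

# Illustrative scores for the two Belmont candidates.
direct = {"db:Belmont_California": 0.9, "db:Belmont_France": 0.8}
class_based = {"db:Belmont_California": 0.7, "db:Belmont_France": 0.1}
final = combine_scores(direct, class_based)
```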
%, and selected the correct matches $M$ using the Threshold or TOP-1 approach. In the next section, we describe how we compute the class-based matching score.