\section{Introduction}
 


A large number of datasets have been made available on the Web as a result of initiatives such as Linking Open Data. As a general graph-structured data model, RDF\footnote{http://www.w3.org/RDF/} is widely used especially for publishing Web datasets. In RDF, an entity, also called an instance, is represented via $\langle subject,$ $predicate, object \rangle$ statements (called \emph{triples}). Predicates and objects capture the \textit{attributes} and \textit{values} of an instance, respectively (terms that are used interchangeably here). Table \ref{table:examples} shows examples of RDF triples.

\begin{table}[ ]
\centering
\caption{Instances represented as RDF triples.}
\begin{tabular}{|l|l|l|}
\hline  
\multicolumn{3}{|c|}{Source Dataset} \\
\hline
Subject & Predicate/Attribute & Object/Value \\
\hline
nyt:2223 & rdfs:label & 'San Francisco' \\
nyt:5962 & rdfs:label & 'Belmont' \\
nyt:5962 & geo:lat & '37.52' \\
nyt:5555 & rdfs:label & 'San Jose' \\
nyt:4232 & nyt:prefLabel & 'Paris' \\
geo:525233 & rdfs:label & 'Belmont' \\
geo:525233 & in:country & geo:887884 \\
geo:525233 & geo:lat & '37.52' \\
\hline
\multicolumn{3}{|c|}{Target Dataset} \\
\hline
Subject & Predicate/Attribute & Object/Value \\
\hline
db:Usa & owl:sameAs & geo:887884 \\
db:Paris & rdfs:label & 'Paris' \\
db:Paris & db:country & db:France \\
db:Belmont\_France & rdfs:label & 'Belmont' \\
db:Belmont\_France & db:country & db:France \\
db:Belmont\_California & rdfs:label & 'Belmont' \\
db:Belmont\_California & db:country & db:Usa \\
db:San\_Francisco & rdfs:label & 'San Francisco' \\
db:San\_Francisco & db:country & db:Usa \\
db:San\_Francisco & db:locatedIn & db:California \\
db:San\_Jose\_California & rdfs:label & 'San Jose' \\
db:San\_Jose\_California & db:locatedIn & db:California \\
db:San\_Jose\_Costa\_Rica & rdfs:label & 'San Jose' \\
db:San\_Jose\_Costa\_Rica & db:country & db:Costa\_Rica \\
  \hline 
  \end{tabular} 
\label{table:examples}

\end{table}
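To make the triple representation concrete, the source instances of Table \ref{table:examples} can be sketched as plain $(subject, predicate, object)$ tuples; an instance description is then just the set of (predicate, value) pairs, i.e., the features, attached to a subject. This is a hypothetical Python illustration, not part of SERIMI itself:

```python
# RDF triples as (subject, predicate, object) tuples,
# mirroring the source dataset of Table 1.
source_triples = [
    ("nyt:2223", "rdfs:label", "San Francisco"),
    ("nyt:5962", "rdfs:label", "Belmont"),
    ("nyt:5962", "geo:lat", "37.52"),
    ("nyt:5555", "rdfs:label", "San Jose"),
]

def describe(subject, triples):
    """Collect the (predicate, value) pairs -- the features -- of an instance."""
    return {(p, o) for s, p, o in triples if s == subject}

# The description of nyt:5962 consists of its label and its latitude.
print(describe("nyt:5962", source_triples))
```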



Besides RDF, OWL\footnote{http://www.w3.org/TR/owl-features/} is another standard language for knowledge representation, widely used to capture the ``same-as'' semantics of instances. Using \verb+owl:sameAs+, data providers can make explicit that two distinct URIs actually refer to the same real-world entity.
The task of establishing these same-as links is known under various names, such as entity resolution and \emph{instance matching}.
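Since \verb+owl:sameAs+ is reflexive, symmetric and transitive, inference over explicit same-as statements amounts to computing equivalence classes of URIs. A minimal sketch of this closure using a union-find structure (illustrative only; the single statement below is taken from Table \ref{table:examples}):

```python
# owl:sameAs is symmetric and transitive, so explicit statements
# partition URIs into equivalence classes; union-find computes them.
explicit_sameas = [("db:Usa", "geo:887884")]

parent = {}

def find(x):
    """Return the representative of x's equivalence class."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in explicit_sameas:
    union(a, b)

# Two URIs denote the same entity iff they share a representative.
print(find("db:Usa") == find("geo:887884"))  # True
```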

\textit{Semantic-driven approaches} \citep{DBLP:conf/ecai/EuzenatV04, DBLP:conf/iceis/LemeCBF09, DBLP:conf/semweb/NiuRZW11} use explicit OWL semantics, such as \verb+owl:sameAs+ statements, to infer same-as relations via logical reasoning.
Complementary to this, \textit{data-driven approaches} derive same-as relations mainly from the attribute values of instances \citep{dorneles_approximate_2011}. While they vary with respect to the selection and weighting of features, existing data-driven approaches are built upon the same paradigm of \textit{direct matching}: two instances are considered the same when they have many attribute values in common \citep{fellegi_theory_1969}. Hence, they produce high-quality results only when there is sufficient overlap between instance representations. This overlap may, however, be small in heterogeneous datasets, especially because the same instance represented in two distinct datasets may not be described using the same schema.

 

For example, in Table \ref{table:examples}, the source instance \verb+nyt:5962+ and the target instances   \verb+db:Belmont_France+ and \verb+db:Belmont_California+ share the same \verb+rdfs:label+ value, i.e., the string 'Belmont' (see  Fig. \ref{fig:graphexample1}). However, \verb+rdfs:label+ is the only attribute whose values overlap across both datasets, as the source and target graphs use rather distinct schemas. This overlap alone is not sufficient to determine whether \verb+nyt:5962+ is the same as  \verb+db:Belmont_France+ (or \verb+db:Belmont_California+). In this scenario of \emph{instance matching across heterogeneous datasets}, direct matching alone   cannot be expected to deliver high quality results.
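The ambiguity in this example can be sketched with a naive direct matcher that scores candidates by exact value overlap (a deliberately simplistic similarity; real systems use string similarity functions and thresholds, but the tie remains):

```python
# Direct matching: score target candidates by the number of
# attribute values they share with the source instance.
source = {"rdfs:label": "Belmont", "geo:lat": "37.52"}

targets = {
    "db:Belmont_France":     {"rdfs:label": "Belmont", "db:country": "db:France"},
    "db:Belmont_California": {"rdfs:label": "Belmont", "db:country": "db:Usa"},
}

def overlap(a, b):
    """Count the values two instance descriptions have in common."""
    return len(set(a.values()) & set(b.values()))

scores = {uri: overlap(source, desc) for uri, desc in targets.items()}
# Both candidates share only the label 'Belmont' -- a tie that
# direct matching alone cannot break.
print(scores)  # {'db:Belmont_France': 1, 'db:Belmont_California': 1}
```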



\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{./Chapters/Chapter3/dm.pdf}
\caption{Examples of instances that share a common attribute value.} 
\label{fig:graphexample1}
\end{figure} 


\textbf{Contributions.} We provide (1) a \emph{detailed analysis} of the datasets and matching tasks investigated in the OAEI 2010 and 2011 \citep{Euzenat10, DBLP:conf/semweb/EuzenatFHHMNRSSSST11} instance matching benchmarks. We show that the tasks vary greatly in complexity: there are difficult tasks, with a small overlap between datasets, that cannot be effectively solved using state-of-the-art direct matching approaches.
To address these tasks, we propose to use direct matching in combination with (2) \textit{class-based matching (CBM)}.

In this chapter, we employ the following class notion: a class is a set of instances in which each instance shares at least one feature (see Definition \ref{definition:featuredef}) with every other instance in the set.

Based on this notion, CBM works as follows: given a class of instances from the source dataset (e.g., \verb+nyt:2223+ and \verb+nyt:5962+), called the \emph{class of interest}, and a set of candidate matches retrieved from the target via direct matching (e.g., \verb+db:San_Francisco+, \verb+db:Belmont_France+ and \verb+db:Belmont_+\verb+California+), CBM refines the set of candidates by filtering out those that do not match the class of interest. CBM, however, does not assume that class semantics are explicitly given, which would enable direct matching at the class level between the source (e.g., Nations) and the target (e.g., Countries).
Instead, CBM builds on the following idea: since the source instances are known to form a class (they share some features), their correct matches should also form a class in the target dataset (they should likewise share some features). Thus, correct matches can be found by computing the subset of candidates whose members have the most features in common. Because each candidate corresponds to a source instance (as computed by direct matching), the class formed by this subset matches the class of interest. Note that, in this process, source and target instances are compared only during the candidate selection step. During class-based matching, \emph{only data from the target dataset} is needed. This is the main difference from direct matching, which compares the source and the target data.
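The following brute-force sketch illustrates this idea on the running example. It is not SERIMI's actual algorithm (developed later in this chapter), but it shows how shared target-side features alone can disambiguate the candidates:

```python
from itertools import product

# Candidate matches per source instance (from direct matching), with
# their target-side features; data follows the running example.
candidates = {
    "nyt:2223": {"db:San_Francisco": {("rdfs:label", "San Francisco"),
                                      ("db:country", "db:Usa")}},
    "nyt:5962": {"db:Belmont_France":     {("rdfs:label", "Belmont"),
                                           ("db:country", "db:France")},
                 "db:Belmont_California": {("rdfs:label", "Belmont"),
                                           ("db:country", "db:Usa")}},
}

def class_based_matching(cands):
    """Pick one candidate per source instance such that the chosen
    targets share as many features as possible, i.e., form a class."""
    sources = list(cands)
    best, best_score = None, -1
    # Enumerate every combination with one target per source instance.
    for choice in product(*(cands[s].items() for s in sources)):
        shared = set.intersection(*(feats for _, feats in choice))
        if len(shared) > best_score:
            best_score = len(shared)
            best = {s: t for s, (t, _) in zip(sources, choice)}
    return best

print(class_based_matching(candidates))
# {'nyt:2223': 'db:San_Francisco', 'nyt:5962': 'db:Belmont_California'}
```

Only the feature sets of the target candidates enter the computation; the source descriptions are never compared against the target again, which is the key difference to direct matching.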




In the example depicted in Fig. \ref{fig:graphexample1}, class-based matching would select \verb+db:Belmont_+\verb+California+ and \verb+db:San_+\verb+Francisco+ as correct matches, because this subset of instances is the most similar among the candidates: its members have the predicate \verb+db:country+ and the value \verb+db:Usa+ in common, as depicted in Fig. \ref{fig:graphexample2}.


\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{./Chapters/Chapter3/cbmexample.pdf}
\caption{Class-based matching.} 
\label{fig:graphexample2}
\end{figure} 
 


We (3) \emph{evaluated} this approach, called SERIMI, using data from OAEI 2010 and 2011, two reference benchmarks in the field. These \emph{extensive experiments} show that SERIMI yields superior results. Class-based matching achieved competitive results compared to direct matching; most importantly, the two are complementary: class-based matching performed well precisely where direct matching performed poorly. Thus, using only a simple combination of the two, our approach greatly improves upon the results of existing systems. Considering all tasks in OAEI 2010, it increases the average F1 score of the second-best system by 0.21 (from 0.76 to 0.97). On the 2011 data, SERIMI also greatly improves upon the results of recently proposed approaches (\emph{PARIS}~\cite{DBLP:journals/pvldb/SuchanekAS11} and \emph{SIFI-Hill}~\cite{DBLP:journals/pvldb/WangLYF11}). Compared to the best system participating in OAEI 2011, SERIMI achieved the same performance; however, while that system leverages domain knowledge and assumes manually engineered mappings, our approach is generic, completely automatic and does not use training data.

\textbf{Outline.} This chapter is organized as follows. In Section \ref{chapter:serimi2}, we introduce basic definitions. In Section \ref{chapter:serimi3}, we provide an overview of SERIMI. In Section \ref{chapter:serimi4}, we discuss class-based matching, and in Section \ref{chapter:serimi5} we propose a solution. In Section \ref{chapter:serimi6}, we present a detailed analysis of the matching tasks and discuss our experiments and results. In Section \ref{chapter:serimi7}, we discuss related work. Finally, we conclude in Section \ref{chapter:serimi8}.