
\section{Introduction}
\emph{Instance matching} \cite{dorneles_approximate_2010} refers to the problem of determining whether two descriptions are about the same real-world entity. Also known as object consolidation, duplicate detection, record linkage, entity resolution, or co-reference reconciliation, it represents a crucial task in data integration, entailing non-trivial problems that are actively studied in many research communities. Traditionally, research in this context has focused on the \emph{single-domain setting}, where data come from the same or similar datasets. Basically, given the descriptions of entities available as records in databases, RDF descriptions on the Web, etc., the instance matching task breaks down into the core problems of (1) finding a suitable \emph{representation} (i.e., selecting attributes), (2) using this for \emph{matching},  
%a source record against candidate records in the same or other target datasets, 
and (3) finally \emph{selecting} the most similar ones (according to a threshold). 
%For the single-domain setting, different solutions have been proposed to solve these individual sub-problems. 
There are \emph{data blocking} techniques that, based on simple representations of entities, can quickly identify candidate records \cite{hernandez_merge/purge_1995,elmagarmid_duplicate_2007}. Then, for more sophisticated and \emph{effective matching}, there are different types of similarity measures \cite{dorneles_approximate_2010,branting_comparative_2003,hadjieleftheriou_approximate_2009}, as well as techniques for \emph{learning} the right combination of attributes, similarity measures, and thresholds to be used for computing and selecting the resulting matches \cite{xiao_top-k_2009}. 
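The three steps above can be made concrete with a minimal sketch of a single-domain matcher; the attribute names, records, and the use of token-level Jaccard similarity are purely illustrative and not part of any system discussed here.

```python
# Minimal sketch of the single-domain pipeline: (1) represent records by
# selected attributes, (2) match them with a similarity measure, and
# (3) select matches above a threshold. All names and data are invented.

def jaccard(a, b):
    """Token-level Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match(source, targets, attributes, threshold=0.5):
    """Return targets whose selected attributes are similar enough to source."""
    results = []
    for t in targets:
        # Average similarity over the attributes chosen as representation.
        sims = [jaccard(source.get(a, ""), t.get(a, "")) for a in attributes]
        score = sum(sims) / len(sims)
        if score >= threshold:
            results.append((t, score))
    return sorted(results, key=lambda x: -x[1])

src = {"name": "John A. Smith", "city": "Berlin"}
tgts = [{"name": "John Smith", "city": "Berlin"},
        {"name": "Jane Doe", "city": "Paris"}]
print(match(src, tgts, ["name", "city"]))
```

Note that the attribute list, the similarity measure, and the threshold are exactly the parameters that single-domain approaches learn or tune, which presupposes that source and target records share those attributes.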

While these single-domain solutions have shown high-quality results in enterprise data integration scenarios, their applicability to the large-scale, heterogeneous Web setting is less clear. Assumptions implicitly embodied in these solutions no longer apply. Firstly, in the larger-scale Web setting that involves multiple domains, it is more expensive to obtain the necessary amount of \emph{training data}, because data are noisy and diverse in terms of attribute representations, which makes it difficult to detect and label similar instances. 
%for learning the right combination of attributes, similarity measures and thresholds. 
More importantly, instances are assumed to have similar representations (i.e., schemas) so that a subset of their common attributes can be selected for matching. 
%Then, similarity measures and thresholds can be fine-tuned for these attributes. 
This \emph{similar representation} assumption, however, holds only for instances from the same dataset -- or from similar ones with largely overlapping schemas that have been aligned upfront -- but it does not apply to instance data on the Web, which come from heterogeneous datasets. Therefore, single-domain instance matching tools that apply a pre-processing step of schema matching do not solve the cases we consider in heterogeneous settings, where there is no overlap between schemas.
%That is, existing solutions assume training data can be obtained for the same domain, and instances exhibit the same or aligned schema, while Web data come from different domains and are associated with different schemas.  
The following example illustrates the challenges in Web data integration.

\textbf{Example. }Consider two descriptions of the \verb+anemia+ disease extracted from two different datasets. The first comes from the Diseasome dataset, which specifically represents data from the Life Science domain. The second comes from DBpedia, a cross-domain encyclopedic dataset. 
%, which contains data extracted from Wikipedia. 
While the description from Diseasome describes genetic aspects (Fig. 1, line 1), the one from DBpedia captures general aspects (Fig. 1, line 7) of \verb+anemia+. The only token they have in common is ``Anemia'', and their schemas do not overlap at all. Using existing blocking techniques~\cite{papadakis_efficient_2011} that compare instances simply by tokens, these instances can be identified as candidate matches. However, this token match is not enough to guarantee that these instances refer to the same disease, because blocking may yield other candidates, such as \verb+anemia+ as a plant (Fig. 1, line 12). Applying more sophisticated techniques to disambiguate these candidates is not possible in this setting because there are no common attributes to select. Hence, attribute-specific learning and tuning of similarity measures and thresholds \cite{dorneles_approximate_2010} do not apply.  
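To see why blocking alone cannot resolve this example, consider a hedged sketch of schema-agnostic token blocking in the spirit of \cite{papadakis_efficient_2011}: attribute names are ignored, every attribute value is tokenized, and instances sharing a token fall into the same block. The toy descriptions below merely mimic Fig. 1; the identifiers and property names are invented, not the actual RDF data.

```python
# Schema-agnostic token blocking (sketch): map each token appearing in any
# attribute value to the set of instances containing it. Attribute names
# are deliberately ignored, so heterogeneous schemas are no obstacle.
from collections import defaultdict

def token_blocks(instances):
    """instances: {instance_id: {attribute: value}} -> {token: {ids}}"""
    blocks = defaultdict(set)
    for iid, attrs in instances.items():
        for value in attrs.values():
            for token in value.lower().split():
                blocks[token].add(iid)
    return blocks

# Invented stand-ins for the three descriptions of Fig. 1.
instances = {
    "diseasome:anemia":        {"diseasome:name": "Anemia",
                                "diseasome:associatedGene": "ALAS2"},
    "dbpedia:Anemia":          {"rdfs:label": "Anemia",
                                "dbpedia:specialty": "Hematology"},
    "dbpedia:Anemia_(plant)":  {"rdfs:label": "Anemia",
                                "dbpedia:family": "Schizaeaceae"},
}

blocks = token_blocks(instances)
# All three instances share the token "anemia": the disease candidates and
# the plant end up in the same block, so a further disambiguation step is
# needed to tell them apart.
print(blocks["anemia"])
```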

\begin{figure} 
 
\centering
\includegraphics[width=0.8\textwidth]{fig1.png}
\caption{Examples for ``Anemia'' in N3 notation (prefixes are used for brevity).} 
\vspace{-10pt}
\end{figure} 


%Recently, active research towards Web-scale integration can also be observed - largely due to the large increase in availability and importance of Web data. For instance, this problem is actively studied by Google researchers \cite{DBLP:conf/sigmod/TalukdarIP10} as well as the large community of Semantic Web researchers due to the rapidly growing Linked Data Web (LDW)\footnote{http://linkeddata.org/}. To date, the LDW contains hundreds of publicly available datasets, capturing billions of resource descriptions in RDF (Resource Description Framework). Creating links through matching RDF resources across datasets is the topmost goal actively pursued by projects such as Linking Open Data \cite{bizer_linked_2009}. As opposed to the single-domain setting, the \emph{Web setting} involves heterogeneous data associated with \emph{varying schemas} that are created to capture different domains. 
% 
Recently, a few proposals for data integration in the Web setting have been made. For instance, Google researchers actively pursue the concepts of \emph{pay-as-you-go} \cite{das_sarma_bootstrapping_2008} and \emph{search-driven Web-scale integration} \cite{DBLP:conf/sigmod/TalukdarIP10}. Initial work towards \emph{schema-level integration} in the Linked Data Web (LDW) setting has also been reported \cite{gomez-perez_overcoming_2009}. However, we note that the specific problem of instance matching in the Web setting, with possibly non-aligned and non-overlapping schemas, remains largely unsolved. To the best of our knowledge, the only work in this direction is a schema-agnostic blocking technique that extracts tokens from entity descriptions to compute candidate matches~\cite{papadakis_efficient_2011}. However, as discussed in the previous example, blocking techniques like this one are meant for selecting candidates; further processing is needed to refine them.  

\textbf{Contributions.} This paper introduces SERIMI, an approach that focuses on the effective matching of candidate instances resulting from blocking. It specifically addresses the challenges mentioned above.
% in the Web setting: 
It is completely unsupervised and thus does not require training data. More importantly, it supports the matching of instances from different domains and schemas. The technical contribution behind this work is the \emph{class-based disambiguation of instances}. Given instances of a particular class of interest in the source dataset, SERIMI quickly finds candidate matches in the target dataset, computes the class in the target dataset that corresponds to the class in the source, and finally uses it to filter out candidates that do not belong to the class of interest. Because the target class is represented based on instances in the target dataset and is computed on the fly, SERIMI relies neither on knowledge about the schema nor on explicit correspondences between classes (i.e., it does not require schemas to be pre-aligned). We 
%implemented our approach and 
performed experiments on large-scale 
%real-world 
datasets extracted from the LDW, 
%. We show that through the disambiguation performed by SERIMI, results from token-based blocking can be substantially improved. We 
and compared SERIMI with two recently published approaches, RiMOM \cite{juanzi_li_rimom:_2009} and ObjectCoref \cite{hu_bootstrapping_2011}. SERIMI outperformed these alternative approaches in 70\% of the cases, and in those cases it improved average F-measure by 10\%. 
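The class-based disambiguation idea described above can be illustrated with a small sketch: (1) collect candidate matches per source instance, (2) derive the target class shared by most candidates, and (3) discard candidates outside that class. The data and the explicit type labels below are invented for illustration; SERIMI's actual class representation is computed on the fly from instance data in the target dataset, not from pre-assigned types.

```python
# Illustrative sketch of class-based disambiguation. Candidates that do not
# belong to the class of interest (derived from the candidates themselves)
# are filtered out. All identifiers and type labels are hypothetical.
from collections import Counter

def disambiguate(candidates_per_source):
    """candidates_per_source: {source_id: [(target_id, target_type), ...]}"""
    # Derive the target class of interest: the type occurring most often
    # among the candidates of all source instances of the source class.
    type_counts = Counter(t for cands in candidates_per_source.values()
                          for _, t in cands)
    target_class = type_counts.most_common(1)[0][0]
    # Keep only candidates belonging to that class.
    return {src: [tid for tid, t in cands if t == target_class]
            for src, cands in candidates_per_source.items()}

candidates = {
    "diseasome:anemia":   [("dbpedia:Anemia", "Disease"),
                           ("dbpedia:Anemia_(plant)", "Plant")],
    "diseasome:diabetes": [("dbpedia:Diabetes_mellitus", "Disease")],
}
print(disambiguate(candidates))
```

Because the majority of candidates for disease instances are themselves diseases, the plant candidate is dropped without any schema knowledge or pre-aligned class correspondences.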

\textbf{Outline.} This paper is organized as follows. After this introduction, we discuss related work in Section 2. In Section 3, we introduce the problem of disambiguation in instance matching. In Section 4, we elaborate on our class-based disambiguation method. Section 5 presents the experimental results, 
%, along with a comparative study against RiMON and ObjectCoref. 
and Section 6 concludes this paper.
