
\section{Related Work}


\emph{Candidate selection} (blocking) techniques \cite{hernandez_merge/purge_1995} make instance matching more efficient by reducing the number of similarity comparisons between instances: based on a distinctive feature that can be processed efficiently (the blocking key), instances are partitioned into blocks such that potentially similar instances are placed in the same block. For the heterogeneous Web setting, the set of all tokens that can be extracted from the instance data has been proposed as blocking keys~\cite{DBLP:conf/wsdm/PapadakisINF11}. As this schema-agnostic approach discards attribute information, it may yield too many incorrect candidates (as shown in our experiments). The authors tackle this problem by extracting and combining several such blocking keys~\cite{DBLP:conf/wsdm/PapadakisINPN12}, which is, in principle, similar to using several attributes as keys. 
Further, it has been shown that the most discriminative keys can be selected by considering their discriminative power and coverage~\cite{DBLP:conf/semweb/SongH11}. 
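For illustration, schema-agnostic token blocking and a discriminativity/coverage-based ranking of candidate keys can be sketched as follows. This is a minimal sketch; the data and all function names are our own illustrative assumptions, not taken from the cited works.

```python
from collections import defaultdict

def token_blocking(instances):
    """Schema-agnostic blocking: every token occurring in any
    attribute value of an instance acts as a blocking key."""
    blocks = defaultdict(set)
    for iid, attributes in instances.items():
        for value in attributes.values():
            for token in str(value).lower().split():
                blocks[token].add(iid)
    # only blocks with more than one instance yield candidate pairs
    return {k: v for k, v in blocks.items() if len(v) > 1}

def key_quality(instances, attribute):
    """Rank a candidate blocking key by coverage (how many instances
    carry the attribute) times discriminative power (how close its
    values are to being unique)."""
    values = [a[attribute] for a in instances.values() if attribute in a]
    if not values:
        return 0.0
    coverage = len(values) / len(instances)
    discriminativity = len(set(values)) / len(values)
    return coverage * discriminativity

people = {
    "i1": {"name": "alice smith", "city": "bonn"},
    "i2": {"name": "alice smyth", "city": "bonn"},
    "i3": {"name": "bob jones", "city": "paris"},
}
blocks = token_blocking(people)
# the token "alice" places i1 and i2 in the same block
```

Here, \texttt{name} would be preferred over \texttt{city} as a key, since both cover all instances but \texttt{name} is more discriminative.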


For \emph{matching}, various learning-based approaches exist, which can be further distinguished by the training data they require and their degree of supervision, i.e. supervised, semi-supervised, or unsupervised \cite{DBLP:conf/semweb/SongH11,hu_bootstrapping_2011,DBLP:journals/tkde/LiTLL09,DBLP:conf/vldb/ChaudhuriCGK07}. These approaches focus on learning instance matching schemes that are optimized solely towards result quality. 
There are also works that address the efficiency of learning. In particular, the efficiency of learning has been studied at query time, where the goal is to minimize the amount of data needed for learning~\cite{DBLP:journals/jair/BhattacharyaG07}. The authors perform collective matching, which improves the quality of matches by considering the similarities of nodes related to these matches (the collective), and propose techniques to reduce the number of related nodes that have to be considered. In this work, we show how the number of queries can be minimized to compute the matches efficiently; their technique for retrieving related nodes can be applied in addition (for collective matching). Furthermore, we also optimize the queries towards execution efficiency. 
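The cited approaches differ in many details, but the underlying idea of learning a matching scheme, i.e. a similarity function together with a threshold, from positive and negative examples can be sketched as follows. This is a toy sketch; the similarity function and the data are our own illustrative assumptions, not taken from the cited works.

```python
import difflib

def similarity(a, b):
    # a simple character-based similarity as a stand-in; the cited
    # approaches also learn which similarity function to use
    return difflib.SequenceMatcher(None, a, b).ratio()

def learn_threshold(labeled_pairs):
    """Pick the similarity threshold that maximizes accuracy on
    labeled (a, b, is_match) examples -- a toy version of learning
    a matching scheme from positive and negative examples."""
    scored = [(similarity(a, b), y) for a, b, y in labeled_pairs]
    best_t, best_acc = 0.0, -1.0
    for t, _ in scored:  # candidate thresholds: the observed scores
        acc = sum((s >= t) == y for s, y in scored) / len(scored)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

pairs = [
    ("alice smith", "alice smyth", True),
    ("alice smith", "bob jones", False),
    ("new york", "new york city", True),
    ("berlin", "paris", False),
]
threshold = learn_threshold(pairs)
```

At matching time, candidate pairs scoring at or above the learned threshold are accepted as matches.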


Applicable to the \emph{on-the-fly integration} scenario, prior research on similarity joins and efficient indexes can be exploited for faster execution~\cite{DBLP:journals/pvldb/MetwallyF12}. These works are complementary to ours: we aim to select efficient queries, while they can be exploited (and implemented by the endpoints) to further optimize the processing of these queries. In principle, SERIMI~\cite{DBLP:conf/webdb/AraujoTDHS12} is able to perform instance matching over SPARQL endpoints. It focuses on refining results using a class-based strategy that prunes candidates that do not belong to the class of interest. We take up this idea to design class components. However, while SERIMI uses a set of instances as an instance-based class representation, we learn the class component (an explicit query-based representation of the class) from data. Further, SERIMI focuses on result quality but does not consider the cost of obtaining these results, i.e. execution efficiency. Also focusing on result quality, it has been shown that entity search results can be improved through on-the-fly instance matching~\cite{DBLP:conf/www/HerzigT12}. As opposed to these works, we optimize for both result quality and execution efficiency. 
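The class-based pruning idea can be sketched as follows. This is a minimal illustration with hypothetical identifiers; SERIMI itself represents the class of interest through a set of instances rather than explicit type assertions.

```python
def prune_by_class(candidates, types, target_class):
    """Class-based pruning: discard candidate matches that are
    not typed with the class of interest."""
    return [c for c in candidates if target_class in types.get(c, set())]

# hypothetical candidate instances with their known classes
types = {
    "db:Berlin": {"dbo:City", "dbo:Place"},
    "db:Berlin_(band)": {"dbo:Band"},
}
survivors = prune_by_class(["db:Berlin", "db:Berlin_(band)"],
                           types, "dbo:City")
# only the candidate typed as dbo:City survives
```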


 