\section{Introduction} 
Interlinking datasets is a crucial step towards Web data integration. A prominent initiative towards this goal is the Linking Open Data project (Linked Data),\footnote{\url{http://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData}} which integrates publicly available datasets by establishing ``same-as'' links between instances that represent the same real-world entity. Effectively establishing these links is a challenging task due to data \emph{heterogeneity}. Especially in this Web of data scenario, instances differ not only at the data level (variations in names, values, etc.) but are also described using schemas that vary largely. The following example illustrates this, showing two pairs of instances that represent the same entities in two datasets but are described using different schema attributes:

\lstset{basicstyle=\scriptsize}
\begin{lstlisting}[]
SIDER dataset:
<sider:12312> <label> "Morphine"
<sider:12312> <type> <sider:Drug>
<sider:43434> <title> "Eosinophilic Pneumonia"
<sider:43434> <type> <sider:Drug>
\end{lstlisting}
\begin{lstlisting}[] 
DRUGBANK dataset:
<drugbank:DB00295> <drugname> "Morphine Sulphate"
<drugbank:DB00295> <synonym> "Morphine"
<drugbank:DB00295> <type> <drugbank:Drug>
<drugbank:DB00295> <affectedOrganism> "Mammals"
<drugbank:DB00001> <drugname> "Morphine"
<drugbank:DB00001> <type> <drugbank:Ingredient>
<drugbank:DB00494> <drugname> "Eosinophilic Pneumonia"
<drugbank:DB00494> <type> <drugbank:Drug>
<drugbank:DB00494> <affectedOrganism> "Mammals"
\end{lstlisting}
This linking problem has been studied widely under various labels, such as entity resolution, record linkage, and \emph{instance matching}~\cite{DBLP:journals/ijswis/FerraraNS11}. It is typically solved in two steps: first, candidate matches are found using a relatively simple but fast matching technique, called \emph{candidate selection} (also referred to as blocking in the offline scenario)~\cite{hernandez_merge/purge_1995,MichelsonK06,elmagarmid_duplicate_2007}; this is followed by a refinement step using a more advanced but also more expensive technique, the actual \emph{instance matching}. Different techniques exist for learning \emph{instance matching schemes}, which capture weighted combinations of attributes, similarity measures, and thresholds to be used for computing and selecting the resulting matches~\cite{DBLP:conf/semweb/SongH11,MaurouxHJAM09}. A \emph{candidate selection scheme} is typically simpler, e.g.\ it uses equal weights for attributes and a binary similarity that indicates true or false (instead of a degree of similarity). 
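To make the two-step pipeline concrete, here is a minimal Python sketch (an illustration only, not the approach proposed in this paper): candidate selection via a cheap, binary key comparison, followed by a more expensive similarity-based refinement. The record structures, key function, and threshold are all hypothetical.

```python
from difflib import SequenceMatcher

def blocking_key(record):
    # Candidate selection key: first token of the name, lowercased.
    return record["name"].split()[0].lower()

def candidate_selection(source, target):
    # Cheap step: index target records by key and pair records sharing
    # a key (binary similarity -- the key either matches or it does not).
    index = {}
    for t in target:
        index.setdefault(blocking_key(t), []).append(t)
    return [(s, t) for s in source for t in index.get(blocking_key(s), [])]

def instance_matching(pairs, threshold):
    # Expensive step: refine candidates with a graded string similarity.
    return [(s, t) for s, t in pairs
            if SequenceMatcher(None, s["name"], t["name"]).ratio() >= threshold]

source = [{"id": "sider:12312", "name": "Morphine"}]
target = [{"id": "drugbank:DB00295", "name": "Morphine Sulphate"},
          {"id": "drugbank:DB00494", "name": "Eosinophilic Pneumonia"}]
candidates = candidate_selection(source, target)   # only one pair survives blocking
matches = instance_matching(candidates, threshold=0.5)
```

Note how the cheap key comparison discards most of the quadratic pair space before the expensive measure is ever computed.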
Since instance matching in this scenario has to be performed across heterogeneous schemas, it actually also involves schema matching, i.e.\ finding attributes in the source dataset that are similar to attributes in the target dataset, such that their values can be compared and used to find matches. 

Given that \verb+label+ and \verb+drugname+ are found to be a pair of comparable attributes, a candidate selection scheme for the example is $\langle label, drugname \rangle$. For the given source instance \verb+12312+ with ``Morphine'' as \verb+label+, this scheme can be employed to find that \verb+DB00295+ in the target dataset is a possible candidate, because its \verb+drugname+ also contains the key value ``Morphine''. However, using this as a \emph{general scheme for all instances} is problematic in this heterogeneous case: for example, the same scheme does not apply to the instance \verb+43434+ because it does not have a \verb+label+ attribute. Solving this requires learning several schemes and executing them against all instances in multiple runs; for this example, the other scheme needed would be $\langle title, drugname \rangle$. In this work, we propose \emph{instance-specific schemes} to address candidate selection over heterogeneous datasets: instead of being applied to all instances, a scheme is tailored to a particular instance. 
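The intuition can be sketched as follows; the correspondence list and instance dictionaries are hypothetical, chosen only to mirror the example above.

```python
# Hypothetical attribute correspondences (source attribute, target attribute).
COMPARABLE = [("label", "drugname"), ("title", "drugname")]

def instance_specific_scheme(instance):
    # A single global scheme fails for instances lacking <label>; instead,
    # pick a scheme based on the attributes this particular instance has.
    for src_attr, tgt_attr in COMPARABLE:
        if src_attr in instance:
            return (src_attr, tgt_attr)
    return None

morphine = {"label": "Morphine", "type": "sider:Drug"}
pneumonia = {"title": "Eosinophilic Pneumonia", "type": "sider:Drug"}
```

Here each instance gets the scheme that fits its own attributes, rather than one scheme being forced onto the whole dataset.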


A further challenging aspect in Linked Data integration is the nature of the data, which is accessible through the interfaces of \emph{SPARQL endpoints}. Existing candidate selection and instance matching techniques tailored to the offline scenario assume that data is available locally (and can be indexed for efficient processing). In the Linked Data setting, this requires downloading complete datasets from endpoints and also periodically crawling data updates to perform incremental data integration. This is however not easily possible because in many cases a copy of the entire dataset is not available for download; the data is then only accessible by querying a SPARQL endpoint. While SPARQL queries can be used to selectively retrieve some data, retrieving the entire dataset often hits time limits imposed by the endpoint providers. Further, the target data to be interlinked may represent only a small fraction of the entire target dataset, in which case downloading the whole dataset is rather inefficient. 

Instead of offline data integration, we propose to study the problem of \emph{on-the-fly candidate selection over SPARQL endpoints}. We cast candidate selection as querying remote SPARQL endpoints for possible candidate matches for a given instance.
For example, candidates for \verb+12312+ can be retrieved using the following SPARQL\footnote{We use a modified SPARQL syntax for the sake of presentation.} queries over the Drugbank data endpoint: 

\begin{listingh}
select ?s where {?s ?p "Morphine"}
select ?s where {?s ?p "Morphine"@en}
select ?s where {?s <drugname> "Morphine"}
select ?s where {?s <synonym> "Morphine"}
\end{listingh} 
\vspace{0.2cm}
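Such candidate queries can be generated mechanically from an instance's key value. The following Python sketch (a simplification with hypothetical attribute names, mirroring the queries above) produces them ordered from most general to most selective:

```python
def candidate_queries(key_value, target_attrs=("drugname", "synonym")):
    # Unconstrained-predicate queries first, then a language-tagged
    # variant, then queries restricted to specific target attributes.
    queries = [
        'select ?s where {?s ?p "%s"}' % key_value,
        'select ?s where {?s ?p "%s"@en}' % key_value,
    ]
    queries += ['select ?s where {?s <%s> "%s"}' % (attr, key_value)
                for attr in target_attrs]
    return queries

queries = candidate_queries("Morphine")
```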

As the number of candidate queries might be large and the cost of executing them over remote endpoints may be high, the problem tackled in this
work is to find instance-specific schemes (i.e.\ queries) that not only deliver high-quality candidates but can also be executed efficiently. For example, although all the queries above can retrieve candidates for \verb+12312+, the first query may run faster than the others, while the last two queries are more selective and may be more precise. For a large number of instances, the choice of the most efficient query makes a big difference in the quality of the selected candidates and the overall execution time. The trade-off between quality and efficiency raises a broad range of issues, which we discuss throughout the paper.
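One simple way to frame this trade-off is to score each query by its expected precision per unit of runtime. This is a sketch only: the estimates below are made up, and the optimization presented later in the paper is more involved.

```python
# Hypothetical per-query estimates: (expected precision, expected runtime in seconds).
ESTIMATES = {
    'select ?s where {?s ?p "Morphine"}':         (0.3, 0.5),
    'select ?s where {?s <drugname> "Morphine"}': (0.9, 2.0),
    'select ?s where {?s <synonym> "Morphine"}':  (0.8, 1.8),
}

def pick_query(estimates):
    # Prefer the query with the best precision-per-second ratio.
    return max(estimates, key=lambda q: estimates[q][0] / estimates[q][1])
```

Under these (made-up) estimates, the fast but unselective query wins; with a slower endpoint or a more ambiguous key value, the more selective queries would.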

The novelties over existing works are that we (1) consider schemes that are instance-specific, (2) conceive on-the-fly candidate selection as a querying problem, and (3) optimize schemes not only for effectiveness but also for efficiency. 

\textbf{Contributions.} 
To this end, we provide the following contributions. (1) We show how on-the-fly candidate selection can be solved by \emph{learning queries} for every instance to establish an instance-specific candidate selection scheme. As queries, we consider not only attributes but also class-related information, learned on-the-fly from data obtained at query time and treated as training examples. (2) Targeting both precision and recall requires dealing with a large number of candidate queries. To improve efficiency, we propose a \emph{heuristic-based search optimization framework} that aims to select and execute only a small number of queries, namely those found to be optimal in terms of both quality and execution time. 
(3) We evaluated the proposed approach, called \emph{Sonda}, using the Ontology Alignment Evaluation Initiative (OAEI) 2010 and 2011 instance matching benchmarks. 
We adapted existing works to this on-the-fly setting in order to obtain non-trivial baselines for comparison. Sonda took 662.34 seconds and achieved an F1 measure of 85\% on average, while the best baseline, S-based, required 843.87 seconds for an F1 measure of 73\%. Also, we used Sonda as the candidate selection strategy in combination with an instance matcher, SERIMI~\cite{DBLP:conf/webdb/AraujoTDHS12}, to refine candidates. With Sonda, the results of this matcher improved greatly: the final matches produced by Sonda in combination with this matcher were about 10\% better in terms of F1 than the best results reported in the benchmark. 


\textbf{Outline.} This paper is organized as follows: we present the problem in Sec.~2. In Sec.~3, we elaborate on the technique for building candidate selection queries. The refinement and optimal execution of these queries to retrieve candidates are discussed in Sec.~4. Sec.~5 presents the experimental results, followed by related work and conclusions in Sec.~6 and 7, respectively.  
