\section{Overview}
\label{chapter:sonda2}
%In this section, we introduce the data model, discuss the problem, and finally, present a brief overview of existing solutions as well as our approach, called \emph{Sonda}.
%
%%\subsection{Data} We focus on heterogeneous Web data  RDF and other types of data that can be modelled as graphs. Closely resembling the RDF data model, we employ a graph-structured data model where every dataset is conceived as a graph $G \in\mathbb{G}$ comprising a set of triples: 
%
%\subsection{Data} 
We focus on RDF and other types of data that can be modeled as graphs. 
%Resembling the RDF model, 
%We employ a graph-structured data model where 
Let $\mathbf{G}$ denote the set of all data graphs. Then, every dataset is conceived as a graph $G \in \mathbf{G}$ comprising a set of triples: 

\begin{definition}[Data Model] A dataset is a graph $G$ formed by a set of triples $(s, p, o)$ where $s \in U$ (called subject), $p \in U$ (predicate) and $o \in U \cup L$ (object), and $U$ and $L$ denote the sets of Uniform Resource Identifiers (URIs) and literals, respectively. Each literal $l \in L$ is conceived as a bag of tokens, $l = \{t_1,\ldots,t_i,\ldots,t_n\}$, drawn from a vocabulary $V$, i.e.\ $t_i \in V$.  
\end{definition} 
With respect to this model, \emph{instances} are resources, i.e.\ URIs, that appear at the subject position of triples. An instance representation can be obtained from the data graph as follows:
\vspace{-0.3cm}

\begin{definition}[Instance Representation] The instance representation is a function $I: U \times \mathbf{G} \rightarrow \mathbf{G}$, which given an instance $s \in U$ and a graph $G \in \mathbf{G}$, maps $s$ to a set of triples in which it appears as the subject, i.e.\ $I(s,G) = \{ (s,p,o) | (s, p, o) \in G \}$. 
\end{definition} 
\vspace{-0.3cm}
 

%Notice that different definitions of instance representations can be considered as well; however, the current definition is enough to explain our method. 
We also use $P(s,G) = \{ p | (s, p, o) \in G \}$ and  $O(s,p,G) = \{ o | (s, p, o) \in G \}$ to denote the set of predicates associated with an instance $s$, also called \emph{attributes}, and the set of objects associated with $s$ via $p$, called \emph{attribute values}, respectively.
 

%Thus, an instance is basically represented through a set of \emph{predicate-value} pairs, where values are bags of tokens or URIs. 
%We will use the terms instance and instance representation interchangeably from now on. 
%Note that for the sake of presentation, only the outgoing edges $(s, p, o)$ of an instance $s$ are considered while incoming edges (triples where $s$ appears as the object) can also be added to the representation of $s$. 

\subsection{Problem - Find Candidate Matches} 
%\emph{Instance matching} is about finding instances that refer to the same real-world object based on their representations. 
% extracted from the data. 
Instance matching is the problem of finding different instance representations that refer to the same real-world entity:
% Formally, we define it as follows:

\begin{definition}[Instance Matching] Given a set of source instances $S$ and a set of target instances $T$, instance matching is the problem of finding all pairs $(s,t) \in S \times T$ such that $sim(s, t) > \alpha$, where $sim: U \times U \rightarrow \mathbf{R}^+$ is a similarity function that, for given instances $s, t \in U$, returns a number representing the degree of similarity between $s$ and $t$.
\end{definition}

Typically, $sim(s,t)$ is captured by an \emph{instance matching scheme}, which is a weighted combination of similarity functions defined over the instances' attribute values, i.e.\ $sim(s,t) = \sum_{p \in P(s,G_S) \cap P(t,G_T)} {w_p\cdot sim(O(s,p,G_S), O(t,p,G_T))}$, where $G_S$ is the source dataset containing $s$ and $G_T$ the target dataset containing $t$. 
%Every $m_i(I(s,G_s), I(t,G_t))$ is a function (e.g. Jaccard distance), which given two instances $s$ and $t$, returns the similarity between them based on their values for the predicates $p_s$ and $p_t$. 
When the overall similarity $sim(s,t)$ exceeds the threshold $\alpha$, the instances $s$ and $t$ form a \emph{match}. Whether a match computed by the algorithm is indeed correct, i.e.\ whether $s$ and $t$ refer to the same real-world entity, is verified against the ground truth. In the heterogeneous setting, the datasets $G_S$ and $G_T$ may exhibit differences in their schemas such that instead of using the same attribute $p$ occurring in both $G_S$ and $G_T$, pairs of comparable attributes $\langle p_S,p_T \rangle$ have to be found. Let $P_{ST}$ be the set of all comparable pairs of attributes in $G_S$ and $G_T$. Then, the extended scheme $sim(s,t) = \sum_{\langle p_S,p_T \rangle \in P_{ST}} {w_{\langle p_S,p_T \rangle} \cdot sim(O(s,p_S,G_S), O(t,p_T,G_T))}$ can be defined for this setting to capture that the values of the source attribute $p_S$ shall be compared with those of the target attribute $p_T$. Accordingly, instance matching entails the main subproblems of (A) finding pairs of comparable attributes $P_{ST}$ (schema matching) as well as (B) choosing and (C) weighting them, and determining the (D) similarity functions (e.g.\ Edit Distance, Jaccard) and (E) the threshold $\alpha$.
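The weighted scheme can be sketched as follows. This is an illustrative instantiation, not a prescribed one: token-level Jaccard similarity and the concrete weights are assumed choices, and literal objects are assumed to be pre-tokenized into frozensets of tokens.

```python
# Hedged sketch of an instance matching scheme: a weighted sum of
# per-attribute similarities, compared against a threshold alpha.

def jaccard(a, b):
    """Jaccard similarity of two token sets (0.0 for two empty sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def sim(s, t, G_S, G_T, weights):
    """Weighted similarity over the attributes shared by s and t."""
    P = lambda x, G: {p for (x2, p, o) in G if x2 == x}
    O = lambda x, p, G: {o for (x2, p2, o) in G if x2 == x and p2 == p}
    total = 0.0
    for p in P(s, G_S) & P(t, G_T):
        # flatten all token bags of attribute p into a single token set
        vs = set().union(*(set(o) for o in O(s, p, G_S)))
        vt = set().union(*(set(o) for o in O(t, p, G_T)))
        total += weights.get(p, 0.0) * jaccard(vs, vt)
    return total

def match(s, t, G_S, G_T, weights, alpha):
    """s and t form a match when the overall similarity exceeds alpha."""
    return sim(s, t, G_S, G_T, weights) > alpha
```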


To avoid the $|S| \times |T|$ instance comparisons required for this task, solving the instance matching problem is often preceded by a \textit{candidate selection} step, which entails a subset of the subproblems mentioned above. Given an instance $s \in S$, candidate selection is the problem of quickly selecting a reduced set $T' \subset T$ of possible \textit{candidate matches} for $s$: 

\begin{definition}[Candidate Selection] Given an instance $s \in S$ and the target instances $T$, candidate selection is the problem of finding $T' \subset T$ such that $\forall t \in T'$, $sim_b(s,t) = true$, where the similarity function here is defined as $sim_b: U \times U \rightarrow \{true,\mathit{false}\}$, i.e.\ it returns $true$ if the given instances are similar, or $false$ otherwise. 
\end{definition}

 
%In the heterogeneous setting, the ontologies between the datasets may differ, thus this candidate selection step entails the sub-problem of finding a pair of comparable predicates $\langle p_s, p_t \rangle$, called \textit{key pair}, on which the instances are compared. 
% (hence also referred to as the \emph{candidate selection} step). Instead of using a combination of similarity function predicates, 
%This candidate selection step simply employs a \textit{candidate selection schema} as $sim'(s,t)$ that is (conjunction of) blocking key(s), i.e. $\bigwedge b(s, t, \langle p_s,p_t \rangle_i)$ , where $b$ is a binary function that returns whether the two values of $p_s$ and $p_t$ match or not. 
%Here, the predicates $p_s$ and $p_t$ constitute the pair of comparable keys while their values are called \emph{key values}. Usually, $m$ is based on exact value matching or value overlap. 
Analogous to instance matching, $sim_b(s,t)$ here is evaluated using a single scheme, called the \emph{candidate selection scheme}, which is a conjunction of similarity functions, i.e.\ $sim_b(s,t) = \bigwedge_{\langle p_S,p_T \rangle \in P_{ST}} {  sim_b(O(s,p_S,G_S), O(t,p_T,G_T))}$. In this context, $\langle p_S,p_T \rangle$ has also been referred to as a blocking key, or a blocking key pair in a setting with heterogeneous datasets. Note that the difference from instance matching is that a relaxed notion of similarity is used here: instead of a degree of similarity, we only need to know whether instances are similar or not. This eliminates the subproblem of finding the threshold. Also, the similarity function is assumed to be defined manually, and all attributes are of equal importance such that finding weights is no longer a problem. As a result, candidate selection targets only two of the five main subproblems above, namely (A) finding comparable attribute pairs and (B) choosing the most selective ones to be included in the scheme. 
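The conjunctive scheme can be sketched as follows. The boolean test used per attribute pair, token overlap, is one possible choice of $sim_b$; the text leaves the concrete function open, and literals are again assumed to be pre-tokenized frozensets.

```python
# Sketch of a candidate selection scheme: a conjunction of boolean
# token-overlap tests over comparable attribute pairs <p_S, p_T>.

def overlaps(vals_s, vals_t):
    """True if the two sets of token bags share at least one token."""
    ts = set().union(*(set(v) for v in vals_s)) if vals_s else set()
    tt = set().union(*(set(v) for v in vals_t)) if vals_t else set()
    return bool(ts & tt)

def sim_b(s, t, G_S, G_T, P_ST):
    """Conjunction over all comparable attribute pairs in P_ST."""
    O = lambda x, p, G: {o for (x2, p2, o) in G if x2 == x and p2 == p}
    return all(overlaps(O(s, pS, G_S), O(t, pT, G_T)) for (pS, pT) in P_ST)

def select_candidates(s, T, G_S, G_T, P_ST):
    """The reduced candidate set T' of target instances similar to s."""
    return {t for t in T if sim_b(s, t, G_S, G_T, P_ST)}
```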

 

This chapter deals with the \emph{candidate selection problem over remote endpoints}. In particular, we focus on the \emph{on-the-fly} pay-as-you-go setting, where we are interested in finding candidate matches for all instances $I$ in $G_S$, or a subset $S$ of $I$, and where both the tasks of learning how to determine candidate matches, i.e.\ the \emph{scheme}, and retrieving them have to be performed online (over remote endpoints). Without loss of generality, we assume that the instances in $G_S$ can be grouped into sets of source instances $S$ that belong to the same class of interest $Z$, i.e.\ $\forall (s \in S). (s, \verb+rdf:type+, Z) \in G_S$, where \verb+rdf:type+ is an attribute predefined in RDF expressing that $Z$ is a class of $s$. A possible goal is to find target matches $T'$ in $G_T$ for source instances $S$ in $G_S$ that belong to the class $Drug$, i.e.\ $\forall (s \in S). (s, \verb+rdf:type+, Drug) \in G_S$. We make this assumption because class information expressed via RDF properties (e.g.\ \verb+rdf:type+) can be used to produce more selective queries, which result in better quality candidate selection. However, class information is not necessary for the method to work; as we will show, dropping this assumption results in only a small loss of quality.
 
%This paper deals with the \emph{candidate selection problem over remote endpoints}. In particular, we focus on the \emph{on-the-fly} pay-as-you-go setting where we are only interested in finding candidate matches for a few given source instances $S$ (instead of all instances in $G_s$), where both the tasks of learning how to determine candidate matches, i.e.\ the \emph{scheme}, and retrieving them, have to be performed online (over remote endpoints). In particular, we focus on a set of source instances $S$ that belong to the same class of interest $Z$, i.e.\ $\forall (s \in S). (s, \verb+rdf:type+, Z) \in G_S$, where $rdf:type$ is a attribute predefined in RDF representing that $Z$ is a class of $s$. For instance, a possible goal is to find target matches $T'$ in $G_T$ for source instances $S$ in $G_S$ that belong to the class $Drug$, i.e.\ $\forall (s \in S). (s, \verb+rdf:type+, Drug) \in G_S$.   



%\textbf{Challenges.} Compared to the single dataset setting, this problem entails additional challenges especially when the data is heterogeneous not only at the data but also schema level. In this regard, \emph{data-level heterogeneity} means that instances referring to the same object may have different values for a the same property, or different syntactical representations of the same value. (1) \emph{Schema-level heterogeneity} means that instances referring to the same object may be represented by different predicates, or different representations of the same predicates. Finding different representations of the same predicate is part of a problem also known as schema matching. Another challenge is (2) \emph{efficiency}: 
%%for instance matching, a scheme is needed to determine how to compare two given instances; we will show that 
%especially with schema heterogeneity, we will show that finding instance matching schemes (defined below)
%%for matching across heterogeneous datasets 
%involves a larger search space. 
%% and more feedback information (training data). 
%Learning the best schemes is particularly expensive in this setting where data is retrieved on-the-fly from remote endpoints. This on-the-fly matching not only entails performance problems during learning of the scheme itself but also to execute it, to retrieve candidate matches.

%\textbf{Instance Matches.} An \emph{instance matching scheme} can be more precisely defined as a (weighted) combination of similarity function predicates, $\sum_i{w_i m(s, t, p_i)} > \alpha$.  Every $m(s, t, p_i)$ is a function (e.g. Jaccard distance), which given two instances $s$ and $t$, returns the similarity between them based on their values for the predicate $p_i$. When the overall similarity between $s$ and $t$ obtained by combining the similarities for every $p_i$ exceeds the threshold $\alpha$, they form a \emph{match}. 
%Clearly, such a scheme focuses on data-level heterogeneity. 
%Given $s$ and $t$ are in the source and target datasets, respectively, and these datasets vary in schema, an extended scheme, e.g. $\sum_i{w_i m(s, t, \langle p_s,p_t \rangle_i )} > \alpha$, is needed to capture that the values of the source predicate $p_s$ shall be compared with those of the target predicate $p_t$. 
%In other words, when the predicates in the source and target are not the same, comparable pairs of predicates have to be found and incorporated into the scheme. 
%The similarity of $s_i$ and $s_j$ is computed by combining the similarity values obtained for all individual pairs of comparable predicates captured by the scheme. They are considered as a \emph{match} when it .
%Thus, this problem entails the subproblems of (A) finding the pairs of comparable predicates $\langle p_s, p_t \rangle$ (schema matching) as well as (B) choosing and (C) weighting them, and determining the (D) similarity functions $m$ and (E) thresholds $\alpha$.  


%Typically, candidate selection is performed as a preprocessing step, producing results that are further refined by a more effective instance matcher. 
% that also tunes the weights and threshold to obtain better results. 


%\begin{figure} [ht]
% 
%\centering
%\includegraphics[scale=0.5]{p1.png}
%\caption{Instance matching process overview.} 
%\vspace{-2pt}
%\label{fig:space}
%\end{figure} 

\subsection{Existing Solutions} 
State-of-the-art instance matching and candidate selection methods are based on supervised learning, leveraging training data to evaluate errors and refine the learned candidate selection schemes. Optimal schemes are those that maximize the coverage of positive examples while avoiding negative examples~\citep{DBLP:conf/vldb/ChaudhuriCGK07}. 
%They are geared towards homogeneous datasets, focusing on the learning of schemes of the type $\sum_i{w_i m(p_i)} > \alpha$ as discussed above. It has been shown that in fact, 
%the underlying learning strategies can also be used to obtain the extended schemes $\sum_i{w_i m(\langle p_s,p_t \rangle_i )} > \alpha$. Instead of all combinations of individual predicates, the search space would have to include all combinations of all possible pairs of predicates. 
%Considering not only all combinations of individual predicates but a search that comprises all combinations of all possible pairs of predicates, existing work~\citep{DBLP:conf/vldb/ChaudhuriCGK07} proposes a method to obtain a range of candidate
%selection schemes, including $\sum_i{w_i m(\langle p_s,p_t \rangle_i )} > \alpha$. 
However, the required training data has to be preprocessed off-line.

In this work, we focus on solutions for on-the-fly candidate selection over possibly remote endpoints, for which the availability of instance-level training data cannot be assumed. 
%and can be executed on
%Since obtaining representative training data across datasets is difficult, recent approaches that specifically target heterogeneous data derive schemes directly from the data. However, unsupervised approaches of this kind focus on the more simpler problem, namely the learning of the candidate selection schemes $\bigwedge m(\langle p_s,p_t \rangle_i)$. 
A previously proposed unsupervised approach~\citep{DBLP:conf/semweb/SongH11}, called \emph{S-based}, assumes precomputed schema mappings such that the comparable attribute pairs are known. Then, it chooses them based on their coverage and discriminability, two metrics derived directly from the data reflecting the number of instances a given attribute can be applied to and how well it distinguishes them. 
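The two metrics can be illustrated as follows. The normalizations used here (coverage as the fraction of instances carrying the attribute, discriminability as the fraction of distinct values among its values) are plausible readings of the description above; the exact definitions in the cited work may differ.

```python
# Illustrative computation of coverage and discriminability of an
# attribute p over a set of instances in a triple graph G.

def coverage(p, instances, G):
    """Fraction of instances that have attribute p at all."""
    P = lambda x: {q for (x2, q, o) in G if x2 == x}
    having = sum(1 for s in instances if p in P(s))
    return having / len(instances) if instances else 0.0

def discriminability(p, instances, G):
    """Fraction of distinct values among all values of p: how well p
    distinguishes the instances it applies to."""
    values = [o for s in instances
                for (x2, q, o) in G if x2 == s and q == p]
    return len(set(values)) / len(values) if values else 0.0
```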
%Based on manually defined coverage and discriminability threshold, the best pairs of comparable blocking keys are selected. 
%Instances are then indexed by their BKVs and candidate sets are formed by searching on the index for overlapping tokens. This candidate selection is combined with another matcher, which further applies approximate string matching on the BKVs of the remaining candidates and filter those that are below a given threshold. 
%In this step, both the similarity function and the threshold are manually defined. Finally, the candidates sets generated are delegated to an arbitrary more elaborated matcher that finds the exact matches. 
An alternative method, referred to as \emph{S-agnostic}, has been proposed in \citep{DBLP:conf/wsdm/PapadakisINF11} as a schema-agnostic unsupervised approach to candidate selection. It does not use attributes for matching but treats instances simply as bags of value tokens. 
%Instances form matches when they have some value tokens in common. Therefore, 
Instances sharing the same token (in any attribute) are placed in one candidate set, i.e.\ the similarity function is based on value token overlap. This approach does not require any effort for learning the scheme and is particularly suited when there is a lack of schema overlap such that only a few or no comparable attributes exist.
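Schema-agnostic blocking of this kind can be sketched with an inverted index from tokens to instances: each instance is reduced to its bag of value tokens (attributes are ignored), and all instances listed under the same token form one candidate block. As before, literals are assumed pre-tokenized into frozensets; non-literal objects contribute themselves as a single token.

```python
# Sketch of schema-agnostic token blocking via an inverted index.
from collections import defaultdict

def token_blocks(G):
    """Map every value token to the set of subjects containing it."""
    index = defaultdict(set)
    for (s, p, o) in G:
        tokens = set(o) if isinstance(o, frozenset) else {o}
        for tok in tokens:
            index[tok].add(s)
    return dict(index)
```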
%The problem with this is that the candidate sets produced are highly redundant because instances are often placed in multiple candidate sets. 
%Consequently, this work employs much more additional processing to further refine these candidate sets. 
We will discuss how S-agnostic and S-based can be adapted for on-the-fly candidate selection and used as non-trivial baselines.  

%\subsection{Existing Solutions vs. Our Solution}
%In this work, we tackle the problem of instance matching. However, we focus on the problem of learning the candidate selection scheme and simply use the resulting scheme in combination with an existing matcher to refine candidate results. 

%It has been shown that the use of schema knowledge improves precision but considerably decreases the coverage of correct matches~\citep{DBLP:conf/wsdm/PapadakisINF11} (recall). In this work, we propose a supervised learning strategy to incorporate predicate information into the candidate selection scheme that considerably improves both measures compared to this previous work~\citep{}. \todo{what have to be cited here? what previous work do you mean?}

\subsection{Sonda}

The primary distinguishing feature of Sonda lies in the granularity of the learned scheme. Instead of using a single scheme for all instances, Sonda optimizes a scheme for every individual instance. More precisely, candidate selection is formulated as a querying problem where for every instance, the learned scheme is a set of candidate selection queries. 
% that can be used to retrieve matches from the target dataset. 

%This separate treatment of instances is introduced to specifically deal with the heterogeneity problem: for instance, two distinct drugs on Sider dataset, named Alpraxolan and Morphine, have two different schemes for mapping to the same drugs on Drugbank Dataset. Alpraxolan uses the scheme $\langle \verb+sider:label+,\verb+drugbank:drugname+ \rangle$, while Morphine uses the scheme $\langle \verb+sider:name+, \verb+drugbank:synomym+ \rangle$,  Using these more fine-grained schemes, we show that the quality of results produced by our approach is superior than those produced by existing supervised~\citep{DBLP:conf/vldb/ChaudhuriCGK07} and unsupervised approaches~\citep{DBLP:conf/semweb/SongH11}. 

Further, an aspect neglected so far is time efficiency: previous work on finding the scheme, as discussed above, focuses only on the quality of matches. 
%On the other hand, there are works on executing similarity joins and building blocking indexes, focusing on how to process the schemes efficiently. In other words, more emphasis is put on the efficiency of execution and less on the efficiency of learning the scheme. The efficiency of learning is crucial when this has to be performed on-the-fly over remote endpoints. Further, how well a scheme can be optimized for time efficiency depends on the nature of the scheme itself. Some schemes are inherently expensive, requiring a large amount of data to be loaded and to be joined. 
%Further, schemes that produce the same result quality may vary in terms of runtime efficiency. 
%Thus, to improve execution performance, schemes shall be selected not only based on result quality but also performance during the learning phase.  
We consider time as an additional optimization objective such that optimal schemes (queries in Sonda) are those that 
yield high quality candidates and can be executed efficiently. Moreover, not only the execution but also the learning of schemes should be time efficient. 

\begin{figure*} []
%\vspace{-10pt}
\centering
\includegraphics[width=1.0\textwidth]{./Chapters/Chapter4/p1.pdf}
\caption{The process of learning queries and executing them. } 
 
\label{fig:template}
 
\end{figure*}
  
The overall process of learning and executing queries is illustrated in Fig.\ \ref{fig:template}. The input is a set of instances from the source dataset. Selecting candidates from the target dataset consists of four main tasks. Firstly, (1) comparable attributes are determined; (2) then, iteratively for every source instance, queries are constructed using the comparable attributes and the information in the instance representation. (3) Efficient and effective queries are selected and executed to retrieve candidates. The selection of queries is performed through heuristic-based search optimization \citep{DBLP:books/daglib/0023820}, which, for every instance, decides which queries to use for execution and when to move on to the next iteration (next instance). (4) During this iterative process, the retrieved candidates are treated as additional information (i.e.\ training examples) for learning queries more refined than those that could be derived from the comparable attributes in step 2. 
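The per-instance loop can be sketched in a deliberately simplified form. This is not Sonda's actual algorithm: the comparable attribute pairs are assumed given (step 1), queries are toy token lookups over a local target graph standing in for the remote endpoint, the scoring heuristic is reduced to "take the first query with results", and the feedback-driven refinement of step (4) is omitted.

```python
# Highly simplified sketch of steps (2)-(3): per-instance query
# construction and selection over a local stand-in for the endpoint.

def candidate_selection(sources, G_S, G_T, P_ST):
    O = lambda x, p, G: {o for (x2, p2, o) in G if x2 == x and p2 == p}
    candidates = {}
    for s in sources:                             # step (2): per instance
        queries = [(pS, pT, tok)                  # one query per key token
                   for (pS, pT) in P_ST
                   for bag in O(s, pS, G_S)
                   for tok in bag]
        for (pS, pT, tok) in queries:             # step (3): try queries
            hits = {t for (t, p, o) in G_T if p == pT and tok in o}
            if hits:                              # stop at first selective query
                candidates[s] = hits
                break
        else:
            candidates[s] = set()
    return candidates
```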

%We describe tasks 1, 2 and 4 to illustrate the learning of queries in the next section and subsequently, discuss task 3, the optimization needed for selecting queries. 

%\todo{move the following content to next two sections}
%We pose the problem of candidate selection as a optimization problem, where, for each source instance $s$, the goal is to find a template query that selects all positive matches for $s$, avoiding negative matches. Without lost of generality, we can assume that every source instance maps to a target instance (a 1-to-1 mapping), consequently, the optimal query for this problem would retrieve a candidate sets with one element. For now, we consider that an oracle can decide if the match is correct or not. Due the heterogeneity of the data, there is no unique template query that works for all instances. Therefore, to be effective, we need to find a template query for each specific source instance.  As the number of those queries may be large, it is time prohibitive to evaluate all to find the optimal one; specially when we have to query a remote endpoint. Therefore, for every instance, we need to be time efficient by evaluating only the queries that has the highest chance to be optimal.

 
%SOLUTION

%\begin{itemize}

%\item Describe how we solve this optimization problem. 
%\item Describe that we use a iterative process because we need to refined the templates queries during the process

%\item Describe how the process starts
%%\item Describe how we solve the efficiency part of the problem
%\item Describe that we need a set of heuristic for efficiently select the best queries.
%\item Describe how this heuristic are used in the branch-and-bound framework

%\item Describe how we solve the effective part of the problem
%\item Describe that we learn the comparable predicates from the data
%\item Describe that we refined the queries during the process
%\item Describe that we consider a set of instances that belong to the same class because it is the only way to solve the problem of ambiguity at class level, when candidate sets contains instances that share the same tokens but belong to different classes.
%\item Show that class information can solve this problem, therefore making the queries more precise.
%\item Describe that we use a matcher to generate positive and negative examples
%\item Describe how those examples are used to refined the queries.  
%\end{itemize} 
%Basically, we tackle this problem in two stages. First, we build all possible effective template queries by sampling the source and target data. In this process we learn the most discriminative comparable predicates and each pair becomes one clause template query. Then, for each source instance, we use a branch-and-bound optimization algorithm that searches for those queries that have the highest chance to retrieve a optimal candidate set, through tree-shaped space compose of all found template queries. 

%We start the process with a initial set of template queries. Aiming to approximate our solution to the optimal solution, we approach this problem in an iterative fashion, where at each iteration we make use of a set of policies to decide whether or not evaluate a query. Mainly, those policies are based on queries results obtained on previous iterations, and their aim is to select the queries that have the highest chance to be optimal for next iterations, therefore this process minimizes the number of queries performed. Generally, the most selective queries are selected, which are more efficient and effective, because they select less elements (and it takes less time); and they select less incorrect matches. To be effective, at each iteration we refine the template queries, building highly selective template queries with respect to our optimal criteria (maximize positive and minimize negative matches). 

%Basically, this refining process adds another clause in the template queries, called class clauses. To build class clauses, we apply a matcher over the already generated candidate sets obtaining positive and negative matches that are input to an algorithm that output a set of class clauses, which are those predicate/value pairs that select only the positive matches.  The matcher can be any approach that uses a more complex similarity measure to select the correct matches (positive examples) among the possible candidates. 

%This algorithm aims to find a best path of queries, representing a minimal set of time-efficient queries that produce high quality results.



%\dtr{the following paragpragh can be moved to beginning of eva section}
%In fact, to achieve comparable quality results, the entire process of learning and execution is faster compared to the unsupervised approach, which requires almost no time in learning.  We also compare the performance results for the entire process with the supervised approach, which requires training data to be locally available (i.e. offline learning instead of online learning over remote endpoints). Despite the overhead of retrieving data over endpoints, we show that our approach yields competitive performance. 
