\section{Learning Queries}
\label{chapter:sonda3}
%We start this section with preliminary definitions and then describe how to learn queries from the data.
%
%\subsection{Candidate Selection Queries} 
%We use queries to retrieve candidates for a given set of source instances. 
We use star-shaped conjunctive queries that constitute a fragment of the Basic Graph Pattern (BGP) feature of SPARQL \citep{DBLP:journals/tods/PerezAG09}.
% (a standard language for querying RDF and graph-structured data): 
%Every such query is simply a conjunction of query predicates $\bigwedge P_i$, $P_i=(s,p,o)$, where $s$, $p$ and $o$ are variables or constants. 
%More precisely a query is defined as follows:


\begin{definition} [Query, Query Component] A star-shaped query $Q(x)$ is a conjunction of query predicates defined as $Q(x) = (x).(x,p_1,o_1) \land \ldots \land (x,p_n,o_n)$ where $x$ is the only (free) variable and all $p_i$ and $o_i$ are constants, for $1 \leq i \leq n$. A \emph{query component} is a query with one predicate, i.e.\ 
%they are of the form 
$(x).(x,p,o)$.  
\end{definition} 

Intuitively, such a query can be conceived as a set of triple patterns that together form a star-shaped pattern around the variable node $x$; bindings to $x$ are instances. In the definition above, we do not allow an unbound variable on the object position because candidate instances are subject instances that contain a specific literal value as a predicate value. For example, the following query retrieves all instances of the type $Drug$ that have "Eosinophilic Pneumonia" as $drugname$: $$(x). (x,drugname,Eosinophilic\ Pneumonia) \land (x,type,Drug).$$ Equivalently, this query can be written in SPARQL as follows: 

\begin{lstlisting}
SELECT ?x WHERE {?x <drugname> "Eosinophilic Pneumonia" . 
                 ?x <type> <Drug> .} 
\end{lstlisting}
\vspace{0.2cm}

We note that besides exact value matching, which in this case returns instances of the type $Drug$ with attribute $drugname$ equal to "Eosinophilic Pneumonia", most data endpoints also support fuzzy matching. Over SPARQL endpoints, we consider the query types EXACT, EXACT\_LANG, LIKE, AND, and OR, as follows:  


\begin{lstlisting}
EXACT: SELECT ?s  
WHERE {?s label "Eosinophilic Pneumonia" } 

EXACT_LANG: SELECT ?s  
WHERE {?s label "Eosinophilic Pneumonia"@en } 

LIKE: SELECT ?s WHERE {?s label ?o FILTER
regex(?o, "Eosinophilic Pneumonia") } 

AND: SELECT ?s WHERE {?s label ?o FILTER 
(regex(?o, "Eosinophilic") && regex(?o, "Pneumonia"))} 

OR: SELECT ?s WHERE {?s label ?o FILTER
(regex(?o, "Eosinophilic") || regex(?o, "Pneumonia"))} 
\end{lstlisting}
\vspace{0.3cm}

%The first query (EXACT) retrieves all instances that the predicate value matches exactly the string "Eosinophilic Pneumonia". 
The EXACT query retrieves all instances whose $label$ value matches the string "Eosinophilic Pneumonia" exactly. The EXACT\_LANG query additionally requires the label value to be annotated with the language tag \textit{@en}, indicating that the value is in English. The LIKE query retrieves instances with a $label$ value containing the string "Eosinophilic Pneumonia". Results of the AND (OR) query have a value containing the string "Eosinophilic" and (or) the string "Pneumonia". Note that these query types represent different similarity functions: while EXACT corresponds to exact value matching, OR corresponds to value token overlap. The latter is used to implement the S-based and S-agnostic baselines. These queries implement the function $sim_b(\cdot,\cdot)$ as required by candidate selection, i.e.\ they return results that match. They are, however, not designed for instance matching, which requires further filtering based on a degree of similarity.  
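To make the five query types concrete, the following Python sketch assembles the corresponding SPARQL strings for a given attribute value. The predicate name \texttt{label}, the helper name, and the absence of string escaping are illustrative assumptions, not part of the approach itself.

```python
# Sketch: generating the five candidate-selection query variants for a
# given attribute value. The predicate "label" is an assumed default.

def build_queries(value: str, predicate: str = "label") -> dict:
    """Return the EXACT, EXACT_LANG, LIKE, AND and OR query strings."""
    tokens = value.split()
    conj = " && ".join(f'regex(?o, "{t}")' for t in tokens)
    disj = " || ".join(f'regex(?o, "{t}")' for t in tokens)
    return {
        "EXACT":      f'SELECT ?s WHERE {{ ?s {predicate} "{value}" }}',
        "EXACT_LANG": f'SELECT ?s WHERE {{ ?s {predicate} "{value}"@en }}',
        "LIKE":       f'SELECT ?s WHERE {{ ?s {predicate} ?o FILTER regex(?o, "{value}") }}',
        "AND":        f'SELECT ?s WHERE {{ ?s {predicate} ?o FILTER ({conj}) }}',
        "OR":         f'SELECT ?s WHERE {{ ?s {predicate} ?o FILTER ({disj}) }}',
    }

queries = build_queries("Eosinophilic Pneumonia")
```

The AND and OR variants tokenize the value on whitespace, mirroring the token-overlap semantics described above.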
%;  and the fifth (OR) when it contains the string "Eosinophilic" or the string "Pneumonia".

%\begin{definition} [Attribute Clause]  An attribute clause defined as $\langle instance, p_t,\sim(o_s) \rangle^A$ is a triple atomic formulae where $\langle p_s, p_t \rangle \in P_{ST}_A $ and $ \langle s_s , p_s,o_s \rangle \in G_s$ and  $\sim$, called \textit{query type}, is a similarity function.
%
% 
%\begin{definition} [A Query]  A query is conjunctive query defined as:  $(subject).A_1 \land ... \land A_r$ with only one free variables $subject$, and without undistinguished variables. $A_1, ..., A_r$ are triple atomic formulae such as: $\langle subject, predicate, object\rangle$.
%\end{definition} 
%\end{definition} 
%\begin{definition} [Class Clause]  An class clause defined as $\langle instance, p_t, o_t \rangle^C$ is a triple atomic formulae where  $ \langle s_t , p_t,o_t \rangle \in G_t$.
%\end{definition} 
%\begin{definition} [Template Query]  A template query T is a query with at least one attribute clause and at most one class clause,  where the object \_ in the attribute clause $\langle instance, p_t,\sim(\_)\rangle$  is undefined.
%\end{definition}  
%\begin{definition} [Instance-Specific Query]  A instance specific query for instance $s$ in $G_s$ is a template query where the attribute clause is defined as $\langle instance,p_t,\sim(k) \rangle^A$, where $k$ is in $\langle s,p_s,k \rangle \in I(s)$
%\end{definition} 
%\subsection{Learning Queries} 
%Now, we describe how queries are learned for every instance. While 
We learn queries that consist of a small but fixed number of components. 
% (experiments will show that two components are sufficient), 
Multiple queries may be used per instance to achieve high recall: they reduce the chance of missing positive matches when they exist. To achieve high precision, only the results of the ``best'' queries are taken into account. 
%We want to avoid queries consisting of too many components because they are too selective, thus, miss positive matches. This intuition is employed by existing candidate selection approaches~\citep{DBLP:conf/www/HuCQ11}, which use schemes that consist of only a few keys; we show in experiments that in fact, queries with two components achieve high recall, while not compromising precision. In fact, while queries have only two components, the instance-specific scheme employed here may comprise several such queries. This is to avoid the chance of missing positive matches, when exist. 
In particular, we focus on queries that are composed of at most one \emph{attribute component} and one \emph{class component}. The attribute component is a keyword query based on a discriminative key that is used to select candidates (e.g.\ $(x). (x,drugname, ``Eosinophilic\ Pneumonia")$), while the class component is a query that selects a class of instances (e.g.\ $(x).(x,type,Drug)$). When combined, the attribute component selects candidates that share a key similar to that of the source instance, and the class component prunes those candidates that do not belong to the class of interest. Attribute components are learned from the set of comparable attributes obtained prior to the iterative selection process, while class components are learned from positive and negative matches obtained during the process.  

%In particular, we focus on queries that are composed of at most one \emph{attribute component} and one \emph{class component}. The attribute component reflects a discriminative key that is used to select candidates, while the class component helps to prune candidates that share the same key value but do not belong to the class of interest. Attribute components are learned from the set of comparable attributes that were obtained prior to the iterative selection process while class components are learned from positive and negative matches that are obtained during the process. The class components are combined with the attribute components to form more refined queries. 


We now describe the main steps involved in building queries.

\subsection{Finding Comparable Key Pairs} 
Finding comparable key pairs requires finding discriminative attributes in the source and the target, $P_{S}$ and $P_{T}$, and constructing pairs of comparable attributes, $P_{ST}$. To generate $P_{S}$ and $P_{T}$, we use the approach proposed by Song et al.~\citep{DBLP:conf/semweb/SongH11}, which selects attributes based on their discriminative power and coverage. 

Instead of assuming the data to be available offline, we apply this approach over a sample of data obtained by querying the source and target endpoints. Sampling is crucial because retrieving all source and target data needed for learning is expensive. 
% in this setting. 
%Because we find candidates for instances of a class of interest $C$, we can adapt the sampling method to focus on data related to $C$. 

To obtain the source data sample, we randomly select X\% (1\% in the experiment) from the given set of source instances $S$, i.e.\ $S'\subset S$, and retrieve their representations, $I(S')=\bigcup_{s \in S'} {I(s,G_S)}$, from the source endpoint. Then, we apply Song et al.'s approach over the triples in $I(S')$ to obtain the keys $P_{S}$. 

To obtain the target keys $P_{T}$, we sample from the target dataset. Note that the source data sample captures only data related to the given source instances $S'\subset S$. Likewise, we propose to select a sample from the target that is related to $S'$. 
%To obtain the keys $P_{T}$, we derive a common class $C$ that instances in $S$ belong to (the class of interest). 
%Because triples in $I(S)$ can be seen as a data-level description of $C$, we use triples in the target that match them.
In particular, we select all target instances that match an attribute value of some instance in $S'$. 
For this, we use queries of the form $(x).(x, p, o_S)$ to retrieve data from the target endpoint, where $x$ and $p$ are variables and $o_S$ is a value in a triple $(s_S,p_S,o_S) \in I(S')$.
%To obtain these triple matches, we use the value $o^s$ to form queries that are processed against the target endpoint. 
For example, for 
%a triple $\langle s, p,o \rangle \in I(S)$, with 
$o_S = ``Eosinophilic\ Pneumonia"$, the query
\texttt{SELECT ?s WHERE \{?s ?p "Eosinophilic Pneumonia"\}} would be used to obtain target instances\footnote{We use OR queries in the experiment.}. We construct queries for all values in $S'$, take the union of their results, and then apply Song et al.'s approach over the resulting sample to obtain $P_{T}$. 

%In this way, several queries are actually constructed for every instance in the source sample. For every instance, we stop issuing queries after more than 20 results have been obtained from the target. 
Finally, we obtain $P_{ST}$ by applying an instance-based schema matching technique~\citep{DBLP:series/synthesis/2011Gal} over $P_{S}$ and $P_{T}$. 
%While more elaborated sampling strategies~\citep{DBLP:journals/tois/CallanC01} might be adapted for this problem in future work, 
We observed in the experiments that this ``focused'' sampling based on $S'\subset S$ provides sufficient information to find $P_{ST}$, even with a small sample of 1\%, resulting in good time performance. 
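The focused sampling step can be sketched as follows, with in-memory triple lists standing in for the source and target endpoints; the helper names, the triple encoding, and the toy data are assumptions made purely for illustration.

```python
import random

# Minimal sketch of focused sampling over in-memory triples (standing
# in for the endpoints). A graph is a list of (s, p, o) triples.

def source_sample(S, G_S, rate=0.01):
    """Randomly pick a fraction of the source instances S and return
    them together with their representations I(S') as triples."""
    k = max(1, int(len(S) * rate))
    S_prime = random.sample(sorted(S), k)
    return S_prime, [(s, p, o) for (s, p, o) in G_S if s in S_prime]

def probe_target(sample_triples, G_T):
    """For every value o_S in the sample, emulate the probe query
    (x).(x, p, o_S) against the target and union the results."""
    values = {o for (_, _, o) in sample_triples}
    return [(s, p, o) for (s, p, o) in G_T if o in values]

# Toy data (hypothetical identifiers); rate=1.0 keeps the run deterministic.
G_S = [("d1", "label", "Morphine"), ("d2", "label", "Aspirin")]
G_T = [("DB00295", "drugname", "Morphine"), ("x1", "name", "Iron")]
S_prime, I_S = source_sample({"d1", "d2"}, G_S, rate=1.0)
target_sample = probe_target(I_S, G_T)
```

Song et al.'s key-selection approach would then be applied to `target_sample` to obtain $P_{T}$.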

\subsection{Constructing Attribute Components}
Next, we use the information in the instance representation of a source instance $s$ and the computed comparable attributes $P_{ST}$ to construct a discriminative query, called attribute component, which finds matching instances in the target. The attribute components for $s$ are directly derived from the attribute pairs in $P_{ST}$, the attributes $P(s, G_S)$ of the source instance, and their values. For every $p_S \in P(s,G_S)$, we find in $P_{ST}$ the attribute $p_T$ that is comparable to $p_S$. If it exists, we construct the component $(x).(x,p_T,o_S)$, where $x$ is a variable, $p_T$ and $o_S$ are constants, and $o_S \in O(s, p_S, G_S)$. Alg. \ref{alg:buildattributequeries} describes this procedure.
%That is, we do not use $p^s$ but the ``target'' predicate corresponding to $p^s$ as key to find candidates that share the same key value $o^s$. 
\begin{algorithm}[]
\caption{AttributeComponentQueries($S$, $G_S$, $G_T$).}
\begin{algorithmic}
%\scriptsize\tt
\STATE  $P_{S}$  $\leftarrow$ FindSourceDiscriminatePredicates($S$, $G_S$)
\STATE  $P_{T}$  $\leftarrow$ FindTargetDiscriminatePredicates($S$, $G_S$, $G_T$)
\STATE  $P_{ST}$  $\leftarrow$ Align($P_{S}$, $P_{T}$) 
\STATE  $Q_S \leftarrow \emptyset$
\FORALL {$s \in S$}
\STATE  $Q_s \leftarrow \emptyset$
\FORALL {$\langle p_S, p_T \rangle \in P_{ST} \wedge p_S \in P(s, G_S)$}
\FORALL {$o_S    \in O(s, p_S,G_S)$}
\STATE  $Q_s \leftarrow Q_s \cup (x).(x, p_T, o_S)$
\ENDFOR
\ENDFOR
\STATE  $Q_S \leftarrow Q_S \cup Q_s$
\ENDFOR
\RETURN $Q_S$ 
\end{algorithmic}
\label{alg:buildattributequeries}
\end{algorithm}
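The per-instance loop of the algorithm above can be sketched in Python, assuming the alignment $P_{ST}$ has already been computed; the encoding of a component $(x).(x,p_T,o_S)$ as a pair $(p_T, o_S)$ and the toy data are illustrative choices.

```python
# Sketch of the component-construction loop: for each source instance,
# pair every aligned target attribute p_T with the instance's values o_S.
# A graph is a list of (s, p, o) triples; a component is a (p_T, o_S) pair.

def attribute_components(S, G_S, P_ST):
    """Return, per source instance s, its components (x).(x, p_T, o_S)."""
    Q = {}
    for s in S:
        Q[s] = set()
        I_s = [(p, o) for (subj, p, o) in G_S if subj == s]  # I(s, G_S)
        for (p_S, p_T) in P_ST:
            for (p, o_S) in I_s:
                if p == p_S:                  # p_S occurs in P(s, G_S)
                    Q[s].add((p_T, o_S))      # component (x).(x, p_T, o_S)
    return Q

# Running example: instance 12312 with attributes label and type.
G_S = [("12312", "label", "Morphine"), ("12312", "type", "Drug")]
P_ST = [("label", "drugname"), ("label", "synonym"),
        ("title", "drugname"), ("title", "synonym")]
Q = attribute_components(["12312"], G_S, P_ST)
# Q["12312"] == {("drugname", "Morphine"), ("synonym", "Morphine")}
```

Only the $label$ alignments fire here, since $title$ does not occur in $P(12312, Sider)$, reproducing the two components of the running example.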

As an example, we have the source attributes $P_{S} = \{label,title\}$, which match all the attributes in $P_{T} =\{drugname, synonym\}$
%; therefore, there are four possible predicate alignments that could be exploited to find the match between the source and target instances, namely: 
such that $P_{ST} =\{\langle label, drugname \rangle$, $\langle label,synonym \rangle$, $\langle title,drugname \rangle$, $\langle title,synonym \rangle\}$. 
%Those pairs in  $U^{Sider-drugname}_A$ form four possible template queries, assuming  only one type query (e.g. EXACT). However, 
For the instance $12312$, we have two attributes, $P(12312,Sider) = \{label,type\}$. The attributes comparable to $label$ are $drugname$ and $synonym$, and the value of $label$ is $``Morphine"$; thus, as attribute components, we have $(x).(x,drugname,``Morphine")$ and $(x).(x,synonym,``Morphine")$. 
 
Without loss of generality and mainly for presentation purposes, we assume in this chapter that $|O(s,p_S,G_S)|=1$. Then, the maximal number of attribute components that can be formed, and hence the space complexity of the procedure above, is bounded by $|P_{ST}| \times |M| \times |S|$, where $M$ is the set of all query types (e.g., EXACT, OR). 
The number of queries for an instance $s$ equals $|P_{ST}| \times |M|$ if $\forall \langle p_S, p_T \rangle \in P_{ST}: p_S \in P(s,G_S)$.  


\subsection{Learning Class Components}
%While attribute components could be used in isolation, they may retrieve a large number of triples. 
Given the class of interest $Z$, a class component is used in addition to an attribute component to prune candidates that do not belong to $Z$. It is of the form $(x).(x,p_T,o_T)$ where $x$ is the only variable and $p_T$ and $o_T$ are constants derived from attributes and values in the target dataset. 
%, focusing only on triples of the correct class of interest. 
%As examples, we describe two class components: (
For instance, the class component $(x).(x,type,Drug)$ selects instances of type $Drug$. Combining this class component with an attribute component, e.g.\ $(x).(x,drugname,``Morphine").(x,type,Drug)$, retrieves instances of the type $Drug$ with $drugname$ ``Morphine''. Class components may be constructed using attributes other than $type$, e.g.\ $(x).(x, \mathit{affectedOrganism}, ``Mammals")$ selects the ``class'' of all those instances that have an attribute $\mathit{affectedOrganism}$ with the value ``Mammals''.  
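A minimal sketch of how an attribute component and a class component might be serialized into a single SPARQL query; the URI bracketing and the function name are assumptions for illustration, not a fixed serialization.

```python
# Sketch: combining one attribute component and one (optional) class
# component, each encoded as a (predicate, value) pair, into SPARQL.

def combine(attr, cls=None):
    """Build a query whose results satisfy both components."""
    patterns = [f'?x <{attr[0]}> "{attr[1]}" .']   # attribute component
    if cls is not None:
        patterns.append(f'?x <{cls[0]}> <{cls[1]}> .')  # class component
    return "SELECT ?x WHERE { " + " ".join(patterns) + " }"

q = combine(("drugname", "Morphine"), ("type", "Drug"))
# q == 'SELECT ?x WHERE { ?x <drugname> "Morphine" . ?x <type> <Drug> . }'
```

Omitting the second argument yields the attribute component alone, which is how candidates are retrieved before any class component has been learned.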

%Learning class components leverages the fact that in some cases, candidate selection is performed in combination with a matcher, which is used to refine the candidates. The output of this matcher serves as training data. The intuition we leverage to learn such a component is as follows. Being descriptive for an entire class, the predicate value pairs that can be used as class components must be present in many instances. Thus, we propose to leverage the candidates retrieved during the process, particularly the positive matches according to the matcher's output, to identify those predicate value pairs that describe all the positive matches. 
%Further, we are interested in the class of entities in the target, $C^t$, which corresponds to the class of interest in the source, $C^s$. The candidate entities, particularly positive matches, we retrieved for the source instances belonging to the class $C^s$ can be seen as an data-level description of $C^t$. 
%leverage these entities to find 
While attribute components are constructed prior to the actual candidate selection process, class components are learned during the process. This is because we infer the latter after a few sets of candidates have been retrieved, i.e.\ at a specific iteration $i$, we treat the candidate sets generated during all iterations previous to $i$ as training data. This is the Query Refinement step illustrated in Fig. \ref{fig:template}. 

More precisely, an instance matcher is used to determine positive and negative matches among these candidates, which then serve as training examples. From these positive and negative examples, the best class components are computed: attribute value pairs that occur in positive matches but in no negative ones. That is, a class component $(x).(x, p_T, o_T)$ is constructed only if $\forall r \in \mu(x): r \in \Delta^+$ and $r \not \in \Delta^-$, where $\Delta^+$ and $\Delta^-$ are the sets of positive and negative matches, respectively, and $\mu(x)$ denotes the result bindings obtained for $x$ when executing the class component as a query. For example, for $\Delta^+=\{DB00295, DB00494\}$ and $\Delta^-=\{DB00001\}$, one class component constructed is the query $(x).(x,type,Drug)$ because both $DB00295$ and $DB00494$ are results of that query while $DB00001$ is not. 

Given the finite sets $\Delta^+$ and $\Delta^-$, to build class components, we obtain the sets of attribute value pairs $\Lambda^+ = \{(p_T, o_T) | (x, p_T, o_T) \in I(x, G_T) \land x \in \Delta^+\}$ and $\Lambda^-= \{( p_T, o_T)| (x, p_T, o_T ) \in I(x, G_T) \land x \in \Delta^-\}$. Then, we compute the difference $\Lambda= \Lambda^+ - \Lambda^-$ to capture target attribute value pairs that occur in the positive matches but in no negative ones. Finally, we use a greedy set-cover-based algorithm \citep{DBLP:conf/soda/CarrDKM00} to select a minimal subset $\Lambda^m \subseteq \Lambda$ that covers all positive matches in $\Delta^+$. Each $(p_T, o_T) \in \Lambda^m$ represents a class component $(x).(x, p_T, o_T)$. 

To compute $\Lambda^m$, the attribute value pairs in $\Lambda$ are ordered by decreasing frequency in the positive matches $\Delta^+$. Then, the first element in this sorted list is removed from $\Lambda$, added to $\Lambda^m$, and the positive matches that it covers are removed from $\Delta^+$. This process is repeated until $\Delta^+=\emptyset$ or all elements in $\Lambda$ have been added to $\Lambda^m$. In particular, we only consider elements in $\Lambda$ that occur in more than one positive match, which avoids generating a class component that applies to only a very small percentage of instances. In practice, this reduces $\Lambda$ to a smaller set of attribute value pairs (a few dozen); consequently, $\Lambda^m$ can be computed quite efficiently. This computation is bounded by $O(|\Lambda|\times|\Delta^+|)$ because we need to pass through all matches in $\Delta^+$ to compute the frequency of each element in $\Lambda$.

The worst-case number of class components is $|\Lambda^m|=|\Lambda|$. However, the number of components actually needed to cover all examples is usually much smaller, i.e.\ $|\Lambda^m| \ll |\Lambda|$, limiting the number of queries that have to be issued. In the running example, this algorithm would select for $\Lambda^m$ only one out of the seven attribute value pairs in $\Lambda$, i.e., $\Lambda^m=\{(type,Drug)\}$.
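The greedy cover step can be sketched as follows; the input \texttt{cover}, mapping each attribute value pair in $\Lambda$ to the positive matches it describes, is an assumed precomputed structure, and recomputing the best pair in each round is a simplification of the sort-once variant described above.

```python
# Sketch of the greedy cover: repeatedly pick the attribute-value pair
# covering the most still-uncovered positives until all are covered.

def greedy_class_components(cover, positives):
    """cover: dict mapping (p_T, o_T) -> set of positive matches.
    Returns the selected pairs Lambda^m as a list."""
    remaining = set(positives)
    # only keep pairs occurring in more than one positive match
    candidates = {pv: ms for pv, ms in cover.items() if len(ms) > 1}
    selected = []
    while remaining and candidates:
        best = max(candidates, key=lambda pv: len(candidates[pv] & remaining))
        if not candidates[best] & remaining:
            break  # no candidate covers any remaining positive
        selected.append(best)
        remaining -= candidates.pop(best)
    return selected

# Running example: (type, Drug) covers both positives.
cover = {("type", "Drug"): {"DB00295", "DB00494"},
         ("affectedOrganism", "Mammals"): {"DB00295"}}
Lm = greedy_class_components(cover, {"DB00295", "DB00494"})
# Lm == [("type", "Drug")]
```

Here $(affectedOrganism, Mammals)$ is discarded up front because it occurs in only one positive match, so a single component suffices.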
%\todo{what is the complexity of this, and the complexity of the whole procedure for constructing class components?} 




