\section{Deriving Transition Queries} 
In this section, we present our solution for efficiently generating transition queries.

\subsection{Deriving Attribute Query Components} 

An attribute query has four parameters: a pair of aligned attributes, a source instance, a similarity function, and a threshold. In this work we assume that the similarity function, as well as its threshold, is given.

\subsubsection{Pair of attributes or alignment}. As described before, a list of attribute alignments between two sources can be obtained using state-of-the-art schema alignment techniques. However, for our purpose it only makes sense to select attributes that act as denominators of an instance (e.g., title, label, name), which we call \textit{relative identifiers}. Those attributes are highly discriminative properties of the data and can be obtained by the simple but efficient entropy-based unsupervised method that we describe below.

\begin{definition}[Identifiers] An attribute $r_k$ is an identifier if $\forall i,j, i \neq j$: $s_i(r_k) \neq s_j(r_k)$. It means that every instance has a distinct value for this attribute.
\end{definition}  

\begin{definition}[Identifiers Distribution] Given $V_{r_k}$ as the set of values of an identifier $r_k$, the probability $\Pr[v_i] = 1 / |V_{r_k}|$, $\forall v_i \in V_{r_k}$.
\end{definition}  

\begin{definition}[Relative Identifiers] Relative identifiers are attributes whose value distribution is the closest to the distribution of an identifier.
\end{definition}  

As identifiers have a uniform value distribution, their entropy $H(r_k)$ is maximal and the ratio $H(r_k) / \log(|V_{r_k}|) = 1$. Using this property, we can easily select the attributes that are relative identifiers: from the alignment list produced by a schema alignment technique, we only consider the attribute pairs in which both attributes have a near-maximal entropy ratio. We exclude textual and non-literal attributes (e.g., URIs), which have a high entropy ratio but are not good discriminative properties.

The pseudo-code is described in Algorithm~1:

\begin{algorithm}
\caption{Select a list of relative identifier pairs}
\begin{algorithmic}
\STATE $list \leftarrow$ attribute alignments between source and target datasets
\STATE $identifiers \leftarrow \emptyset$
\STATE $max \leftarrow 0$
\FORALL{$(a,b)$ in $list$}
\STATE $H_a \leftarrow$ entropy ratio of attribute $a$
\STATE $H_b \leftarrow$ entropy ratio of attribute $b$
\IF{($H_a > max$ and $H_b > max$) or ($H_a$ is maximal or $H_b$ is maximal)}
\STATE $identifiers \leftarrow identifiers \cup \{(a,b)\}$
\STATE $max \leftarrow H_a$
\ENDIF
\ENDFOR
\RETURN $identifiers$
\end{algorithmic}
\end{algorithm}
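To make the selection concrete, the entropy-ratio test and the pair selection can be sketched in Python as follows. This is a minimal sketch, not the framework's implementation: the function names, the dictionary-based data layout, and the fixed $0.95$ cut-off for "near-maximal" are illustrative assumptions.

```python
import math
from collections import Counter

def entropy_ratio(values):
    """Ratio H(r_k) / log(|V_rk|): equal to 1.0 exactly when all values are
    distinct (an identifier, i.e. a uniform value distribution)."""
    counts = Counter(values)
    n = len(values)
    if len(counts) <= 1:
        return 0.0  # a constant attribute carries no identifying information
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(len(counts))

def select_relative_identifiers(alignments, source, target, threshold=0.95):
    """Keep aligned attribute pairs whose entropy ratios are both near-maximal.
    `alignments` is a list of (a, b) attribute-name pairs; `source`/`target`
    map attribute names to their value lists (illustrative data layout)."""
    identifiers = []
    for a, b in alignments:
        if (entropy_ratio(source[a]) >= threshold
                and entropy_ratio(target[b]) >= threshold):
            identifiers.append((a, b))
    return identifiers
```

For example, a `label`/`title` pair with all-distinct values passes the test, while a `type`/`category` pair with a single repeated value is rejected. A production version would additionally filter out textual and non-literal attributes, as described above.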
 
Then, we build an attribute query for each selected relative identifier pair. Notice that more than one pair may be selected, for instance the pairs (label, title) and (phone, phonenumber). For those pairs, we would build the queries $Q_a(s_i, label, title, \omega, \delta, type)$ and $Q_a(s_i, phone, phonenumber, \omega, \delta, type)$. The attribute queries generated by this procedure are used as the attribute query components of the transition queries during the rest of the process. The only parameter that changes from state to state in those queries is the instance $s_i$. Notice that future implementations of this framework may also vary the similarity function and threshold, to better adapt to the characteristics of the attribute values. For instance, a specific similarity measure could be used for names, numbers, geographic information, etc.

\subsubsection{Query Types}. We define four types of attribute queries: Exact, Keyword, And, and Or.

Below we list an example of each query type:
\begin{itemize}
\item Exact: \texttt{SELECT * WHERE \{?s ?p = "Token"\}}
\item Keyword: \texttt{SELECT * WHERE \{?s ?p LIKE "Token"\}}
\item And: \texttt{SELECT * WHERE \{?s ?p = "Token1" AND "Token2"\}}
\item Or: \texttt{SELECT * WHERE \{?s ?p = "Token1" OR "Token2"\}}
\end{itemize}

For each alignment we generate the four queries above.
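The generation of the four variants for one alignment can be sketched as follows. The SPARQL-like syntax above is schematic (operators such as \texttt{LIKE} depend on the endpoint's full-text extension), so this sketch only builds the query strings; the function name and dictionary keys are illustrative assumptions.

```python
def attribute_queries(target_attr, value):
    """Build the four attribute-query variants for one aligned target
    attribute and one source-instance value (SPARQL-like pseudo syntax)."""
    tokens = value.split()
    return {
        "exact":   f'SELECT * WHERE {{ ?s <{target_attr}> "{value}" }}',
        "keyword": f'SELECT * WHERE {{ ?s <{target_attr}> LIKE "{value}" }}',
        # And/Or split the value into tokens and combine them
        "and": f'SELECT * WHERE {{ ?s <{target_attr}> "'
               + '" AND "'.join(tokens) + '" }',
        "or":  f'SELECT * WHERE {{ ?s <{target_attr}> "'
               + '" OR "'.join(tokens) + '" }',
    }
```

For a value such as \texttt{"Token1 Token2"}, the Or variant matches instances containing either token, while the Exact variant requires the full literal; this containment between result sets is what the pruning strategy below exploits.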

\subsubsection{Query Pruning}
For each instance we then have $N \times 4$ queries, where $N$ is the number of alignments.

As we want to find the match as fast as possible, the system starts by choosing a query at random.
For each instance processed, we update the time that each query took to execute, in a stochastic fashion: if a query fails, its time is the summation of the times of all other queries previously performed. We always select the query that is the fastest at that specific moment.

After 10\% of the instances have been processed, we remove the queries that did not retrieve any result for any of the instances processed. This prevents queries generated by a bad alignment from being executed in vain.
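The bookkeeping behind this strategy can be sketched as a small scheduler. This is an illustrative reading of the update rule, not the framework's implementation: the class and method names are assumptions, and we interpret "its time is the summation of all other queries previously performed" as charging a failed query the total time spent so far.

```python
class QueryScheduler:
    """Pick the currently fastest query and prune fruitless ones
    (names and structure are illustrative)."""

    def __init__(self, query_ids):
        self.cost = {q: 0.0 for q in query_ids}  # accumulated time per query
        self.hits = {q: 0 for q in query_ids}    # non-empty results seen
        self.total_time = 0.0                    # time spent on all queries so far

    def next_query(self):
        # always select the fastest query at this specific moment
        return min(self.cost, key=self.cost.get)

    def record(self, query_id, elapsed, returned_results):
        self.total_time += elapsed
        if returned_results:
            self.cost[query_id] += elapsed
            self.hits[query_id] += 1
        else:
            # failed query: charge it the summed time of previous queries
            self.cost[query_id] += self.total_time

    def prune(self):
        # called once ~10% of the instances have been processed:
        # drop queries that never returned a result (e.g. bad alignments)
        for q in [q for q, h in self.hits.items() if h == 0]:
            del self.cost[q], self.hits[q]
```

A fast, productive query thus accumulates a small cost and keeps being selected, while a failing query is quickly penalized and, if it never produces results during the warm-up phase, removed altogether.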

As the result sets of the query types are subsets of each other, we execute the queries in subset order. That is, if an Or query retrieves an empty result, we do not need to try the other queries, because their results are subsets of the Or query's result. For each instance, we keep executing queries until we obtain a candidate set with a single element.
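The short-circuit execution can be sketched as follows. We assume one plausible widest-to-narrowest ordering (Or, And, Keyword, Exact) and a caller-supplied \texttt{execute} function that returns the candidate set for a query; both are illustrative assumptions.

```python
def run_in_subset_order(execute, queries_by_type):
    """Execute query variants from widest to narrowest result set,
    stopping as soon as a single candidate remains. An empty wider
    result implies the narrower variants are empty too, so we stop."""
    candidates = set()
    for qtype in ("or", "and", "keyword", "exact"):
        result = execute(queries_by_type[qtype])
        if not result:
            break  # narrower variants are subsets and must be empty too
        candidates = result
        if len(candidates) == 1:
            break  # single candidate reached: no need to narrow further
    return candidates
```

If the Exact query turns out to be empty while Keyword returned one candidate, the loop has already stopped at Keyword, so the smallest non-empty candidate set produced by the chain is what gets returned.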

In this fashion, we always execute the fastest query and always select the smallest candidate set that those queries can generate.
