\subsection{Utility of the Approach} 
Finally, in this section, we investigate when it is more efficient to execute instance-specific queries against the remote endpoint, rather than to retrieve all instances from the endpoint and perform candidate selection locally. This is the case when:
\begin{equation}
\sum_{s \in S}\sum_{q_s \in Q_s}   Time(q_s)  < \sum_{q_t \in Q_T}   Time(q_{t})
\label{eq:tradeoff_condition}
\end{equation}

where $Q_s$ is the set of instance-specific queries for source instance $s \in S$, and $Q_T$ is the set of queries that retrieve all instances in the target dataset. 
If we denote by $t_S$ and $t_T$ the estimated average execution times of the queries in $Q_s$ and $Q_T$, respectively, then Eq.\ \ref{eq:tradeoff_condition} can be approximated by:

\begin{equation}
|S| \times |Q_s|\times  t_S      <  |Q_T| \times  t_T 
\label{eq:tradeoff_condition2}
\end{equation}
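The approximated condition in Eq.\ \ref{eq:tradeoff_condition2} can be checked programmatically. The following is a minimal sketch; the function name and the example values in the usage note are illustrative, not taken from our experiments:

```python
def instance_queries_cheaper(n_source, n_qs, t_s, n_qt, t_t):
    """Eq. (2): instance-specific querying beats bulk retrieval
    when |S| * |Q_s| * t_S < |Q_T| * t_T."""
    return n_source * n_qs * t_s < n_qt * t_t
```

For example, with 100 source instances, 5 queries per instance at 0.02s each, and 350 bulk queries at 2.65s each, the instance-specific strategy wins.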

The most straightforward query that retrieves the entire content of an endpoint is the SPARQL query \textit{"select * where \{?s ?p ?o\}"}. 
In this case, $|Q_T| = 1$, and consequently, the inequality in Eq.\ \ref{eq:tradeoff_condition2} depends entirely on the time $t_T$. For a relatively small dataset with a few thousand triples, this query can be reasonably fast. For large datasets, it times out, because in practice endpoints impose a limit on the time spent processing a query (or on the number of triples a single query may retrieve). 
Instead, the query \textit{"select * where \{?s ?p ?o\} limit X offset Y"} can be used, where $X$ is the number of triples retrieved per query and $Y = X \times i$ with $i \in \{0, 1, \ldots, \lceil\frac{|T|}{X}\rceil - 1\}$. In this case, $|Q_T| = \lceil\frac{|T|}{X}\rceil$.
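As a sketch, the set $Q_T$ of paginated queries can be generated as follows (a minimal illustration; the function name is ours, and the endpoint is assumed to support \textit{limit}/\textit{offset}):

```python
import math

def paginated_queries(total_triples, page_size):
    """Generate the |Q_T| = ceil(|T| / X) paginated SPARQL queries
    that together retrieve all triples from the target endpoint."""
    n_pages = math.ceil(total_triples / page_size)
    for i in range(n_pages):
        yield (f"select * where {{?s ?p ?o}} "
               f"limit {page_size} offset {page_size * i}")
```

For instance, a dataset of 350,000 triples with a page size of 1000 yields 350 queries, the last one with offset 349,000.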

To obtain concrete numbers for Eq.\ \ref{eq:tradeoff_condition2} using real-world data, we selected the NYTimes dataset from the OAEI benchmark. It contains 350,000 triples in total, which we loaded using the same system configuration discussed before. We considered one attribute component and 5 query types; consequently, $|Q_s| = 5$. We obtained $t_S = 0.02s$ for these 5 query types. Assuming a limit of 1000 (i.e., $|Q_T| = \frac{350{,}000}{1000} = 350$), we obtained $t_T = 2.65s$. With these values, Eq.\ \ref{eq:tradeoff_condition2} reduces to $|S| < 9{,}275$, i.e., for this example, our method is more efficient than the alternative of retrieving all instances whenever $S$ contains fewer than 9,275 instances. 
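As a sanity check on this arithmetic, the break-even value follows directly from rearranging Eq.\ \ref{eq:tradeoff_condition2} (a minimal sketch; variable names are ours):

```python
# Measured values for the NYTimes example.
n_qs, t_s = 5, 0.02      # instance-specific queries per source instance, avg. time each
n_qt, t_t = 350, 2.65    # paginated target queries, avg. time each

# Rearranging Eq. (2): |S| < (|Q_T| * t_T) / (|Q_s| * t_S)
break_even = round((n_qt * t_t) / (n_qs * t_s))  # -> 9275
```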

We note that this is the case for all source datasets in the OAEI benchmark, i.e.\ none of them has more than 5,000 instances. Moreover, most of the target datasets in this benchmark contain millions of triples. It is clearly not efficient to download millions of triples when only a few thousand of them might be relevant for the integration task. These results suggest that for real-world datasets and integration tasks, downloading all data is not always needed. Our on-the-fly matching solution, which selects only the necessary candidates, helps to improve performance, especially when the number of instances to be matched is small. 
