\section{Evaluation}
There exist no solutions that can be directly applied to our on-the-fly integration problem. To evaluate Sonda, we designed two best-effort, non-trivial baselines based on S-based and S-agnostic, two recent candidate selection approaches that we adapted to the on-the-fly setting. Although we refer to these baselines as S-based and S-agnostic, the improvements reported in this paper are measured against the baselines, not against the original systems (which cannot be directly compared to Sonda).

Summarizing the experiments discussed in detail below, Sonda took 662.34s (2204.75s with real Web endpoints) and achieved an average effectiveness of 85\%, measured by F1. The best baseline, S-based, took 843.87s (4904.75s with Web endpoints) and achieved 73.47\% F1. 
To assess the effect of candidate selection on instance matching, we ran SERIMI on top of Sonda's candidates and compared the results with those reported for the OAEI benchmark. On average over all datasets, Sonda+SERIMI was the best system, yielding a 13\% F1 improvement over SERIMI alone. This indicates that Sonda preserved the correct candidates while reducing ambiguity (incorrect candidates), helping the matcher achieve higher-quality results. 


  
\textbf{Datasets.} We relied on the datasets and ground truth published by OAEI~\cite{DBLP:journals/jods/EuzenatMSSS11}. We used the life science (LS) collection (which includes Sider, Drugbank, Dailymed, TCM, and Diseasome) and the Person-Restaurant (PR) collection from 2010, as well as all datasets from the 2011 collection. The matching tasks are cross-dataset tasks, which always involve a pair of datasets: one serves as the source while the other is treated as the target. 

\textbf{Metrics.} For assessing candidate selection results, we employed standard metrics, namely Reduction Ratio (RR), Pair-wise Completeness (PC) and F1. Intuitively, a high RR means that the candidate selection algorithm helps to focus on a smaller number of candidates, while a high PC means that it preserves more of the correct candidates:
\[
PC = \frac{\text{\#Correctly Computed Candidates}}{\text{\#Ground Truth Candidates}},
\qquad
F1 = \frac{2 \cdot RR \cdot PC}{RR + PC}
\]
More precisely, RR captures the reduction in the number of all possible candidate pairs that have to be considered for matching. A normalized version of RR can be used when the number of all possible candidate pairs is large~\cite{DBLP:conf/semweb/SongH11}. We also use normalization, but instead of considering the reduction in the number of candidate pairs, we consider the reduction in the number of candidates. Besides these metrics, we also report the average number of queries evaluated per instance as well as the time needed. 
For assessing the instance matching results, we used the standard metrics Precision, Recall and F1. 
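As an illustration, the candidate selection metrics can be computed from per-instance candidate sets as sketched below. This is our own minimal Python sketch, not the paper's implementation; the variable names and the exhaustive source-target baseline used for the normalized RR are assumptions.

```python
def candidate_selection_metrics(candidates, ground_truth, n_target):
    """PC, normalized RR and their harmonic mean F1 for one matching task.

    candidates / ground_truth: dicts mapping each source instance to a set
    of target-instance candidates; n_target: size of the target dataset.
    """
    correct = sum(len(candidates.get(s, set()) & gt)
                  for s, gt in ground_truth.items())
    pc = correct / sum(len(gt) for gt in ground_truth.values())
    # Normalized RR over candidates: reduction w.r.t. treating every
    # target instance as a candidate for every source instance.
    retrieved = sum(len(c) for c in candidates.values())
    rr = 1 - retrieved / (len(candidates) * n_target)
    f1 = 2 * rr * pc / (rr + pc) if rr + pc > 0 else 0.0
    return pc, rr, f1
```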
 
\textbf{Systems.}  
Our system Sonda and the results of these experiments are available for download\footnote{https://github.com/samuraraujo/Sonda} at GitHub. Sonda is implemented in Ruby, and its queries are issued as SPARQL queries over remote SPARQL endpoints. 
For the OAEI datasets, we could use a public SPARQL endpoint\footnote{http://dbpedia.org/sparql} on the Web, which serves DBpedia (the largest of all given datasets). This endpoint runs the OpenLink Virtuoso Universal Server version 06.04.3132 on Linux, using 4 server processes. For all other datasets, we employed the OpenLink Virtuoso Universal Server version 6.1.5.3127 as a SPARQL endpoint and ran it on a server in our controlled environment with an Intel Core 2 Duo at 2.4 GHz, 4 GB RAM, and a FUJITSU MHZ2250BH FFS G1 248 GB hard disk. 
We loaded these datasets into Virtuoso, creating the default S-P-O index as well as Virtuoso's inverted index on literal values, which is used to support LIKE, AND and OR queries and is created using the following commands:
\lstset{basicstyle=\small}
\begin{lstlisting}[ ]   
DB.DBA.RDF_OBJ_FT_RULE_ADD 
(null, null, 'index_local');
DB.DBA.VT_INC_INDEX_DB_DBA_RDF_OBJ (); 
\end{lstlisting}
We used two configurations. In the \emph{Web} configuration, the DBpedia endpoint on the Web is used, while in the \emph{controlled} configuration, all datasets are served by our local endpoint. The reported values are averages over five runs. 
%
For performance reasons, remote data endpoints stop processing according to a manually set query timeout. The endpoint we used additionally supports a query limit: a \emph{query limit} of 100, for instance, indicates that the endpoint should stop processing after 100 results have been retrieved. 
To evaluate the effect of \emph{class components}, we considered two versions of Sonda, namely without (Sonda-A) and with class components (Sonda-C). 

For comparison, we modified the \emph{S-agnostic}~\cite{papadakis_efficient_2011} and \emph{S-based}~\cite{DBLP:conf/semweb/SongH11} approaches by translating their schemes into queries that are processed against endpoints. S-agnostic's scheme consists of all value tokens, while S-based uses the values of discriminative attributes. Accordingly, OR queries are created to consider all value tokens used by S-agnostic. These queries are executed sequentially and their results are aggregated to produce the candidate set. As discussed, our approach and S-based use discriminative and comparable attributes for candidate selection. To achieve this in the on-the-fly setting, we use the sampling procedure presented in Section 3.1 for both approaches. 
Further, S-based applies an additional similarity function to further prune incorrect candidates retrieved from these queries. For comparison purposes, we apply this strategy to all approaches, using the same similarity function. 
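To make the translation concrete, the following sketch shows how a set of value tokens could be turned into a single OR query against a Virtuoso endpoint. This is our illustrative reconstruction, not code from the original systems; the function name, the use of Virtuoso's bif:contains full-text predicate, and the default limit are assumptions.

```python
def tokens_to_or_query(tokens, limit=30):
    """Build a SPARQL query retrieving instances whose literal values
    contain any of the given tokens (OR semantics), capped by a limit."""
    # Virtuoso's bif:contains takes a boolean full-text expression.
    expr = " OR ".join("'%s'" % t for t in tokens)
    return ('SELECT DISTINCT ?s WHERE { ?s ?p ?o . '
            '?o bif:contains "%s" . } LIMIT %d' % (expr, limit))
```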

In summary, S-agnostic uses only value tokens, while S-based additionally employs attributes (focusing on discriminative ones). Sonda-A extends S-based by considering 4 more query types and, furthermore, implements the heuristic-based search optimization. Sonda-C extends Sonda-A with class components. 
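The similarity-based pruning step applied to all approaches can be sketched as follows. The token-level Jaccard measure and the threshold value here are our assumptions for illustration, not necessarily the function used by S-based.

```python
def prune_candidates(source_label, candidates, threshold=0.5):
    """Drop candidates whose label is not sufficiently similar to the
    source instance's label (Jaccard similarity over lowercase tokens)."""
    src = set(source_label.lower().split())
    kept = {}
    for cand, label in candidates.items():
        tgt = set(label.lower().split())
        union = src | tgt
        sim = len(src & tgt) / len(union) if union else 0.0
        if sim >= threshold:
            kept[cand] = label
    return kept
```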

\input{sec-table}

\subsection{Candidate Selection Results} 
Table 1 shows an overview of the results. Compared to the baseline approaches over all 18 matching tasks, Sonda-A and Sonda-C improved the F1 score in 16 and 17 of the tasks, respectively. Average F1 values for Sonda-A, Sonda-C, S-based and S-agnostic are 80.92, 85.03, 73.47 and 64.22, respectively. This translates to a 14\% improvement of Sonda-C over the best baseline, S-based. Average times for Sonda-A, Sonda-C, S-based and S-agnostic are 624.21s, 662.34s, 843.87s and 967.27s, respectively. Thus, Sonda-A and Sonda-C were 34\% and 22\% faster than the fastest baseline, S-based. Since higher-quality results often require more processing time, we also examine time performance in the light of result quality. In particular, we look at the matching tasks for which the result quality was comparable among the systems (differences in PC and F1 $<5\%$). 
For these tasks, Sonda-A and Sonda-C were more than 45\% faster than the fastest baseline, S-agnostic.

\textbf{Task Complexity.} Differences in the F1 values obtained for different tasks indicate their varying levels of complexity. Sonda-A and Sonda-C consistently outperformed the baselines over all tasks (with one exception, task 16, where results were comparable).

Large improvements could be achieved for tasks 4-8, especially the two tasks that involve DBpedia. These tasks involve large datasets and thus entail a larger number of possible candidates that have to be considered for every instance. Sonda was more effective in dealing with this ambiguity. In particular, it was more effective both in finding the correct candidates and in reducing the number of candidates, as indicated by average PC and RR, respectively. Higher PC could be achieved because more query types were considered, thus incorporating a larger space of candidates. This, however, does not come at the expense of RR: while S-based and S-agnostic use all their query results as candidates, Sonda selectively chooses the best queries and utilizes only their results as the candidate set. 



There are 4 problematic tasks where F1 values were $<0.7$ (tasks 5, 9, 11 and 13). Particularly difficult was task 9, which involves GeoNames. This dataset contains many instances with the same labels and only little additional information to disambiguate them. The strategy used by OAEI matching systems to deal with this task is to manually encode and exploit geo- and location-specific knowledge in the form of rules (which were not used by our systems). Task 11 involves an artificial dataset where syntax mistakes were added to produce string-level ambiguity. Sonda's PC values (58.47 and 59.32, respectively) were lower than those achieved by the baselines. Here we can clearly see that the strategy of aggregating all query results works well, while the heuristic used by Sonda to choose only the best queries may compromise PC. However, this heuristic has a positive effect on RR, resulting in a higher F1 also for this task.   

\textbf{Attribute Components.} On average, the systems using attribute components, Sonda-C, Sonda-A and S-based, are more effective than S-agnostic, which dismisses attribute information and uses value tokens only. In terms of F1, their values are 85.03, 80.92 and 73.47, respectively, compared to 64.22. 

\textbf{Class Components.} The effect of the class component can be seen in the differences between Sonda-C and Sonda-A. The former achieved a higher RR and comparable PC, indicating that the class component has the positive effect of reducing the number of incorrect candidates. Especially for tasks 4 and 5, which involve DBpedia, improvements in RR were large (from 47.6 to 87.63 and from 22.76 to 62.52, respectively). The class component has a stronger effect here because this dataset simply captures more candidate results, and thus there is potentially also a higher number of incorrect results that can be pruned. 

\textbf{Processing Cost}. As captured by Table 1, the overall processing cost can be decomposed into learning and execution times. While learning is essential to produce the queries capturing different candidates, execution is needed to retrieve them. Thus, both steps are crucial for result quality. However, while the baselines execute all the learned queries, Sonda does not. This has a large impact on performance: learning is relatively cheap, making up only 7\% of the total time on average. Due to the sampling we performed, the number of queries needed to retrieve data during learning is substantially smaller. 
Although Sonda-A and Sonda-C were 4.7x and 3.8x slower during learning than S-agnostic, the fastest approach, which simply maps value tokens to queries, they were 41\% and 36\% faster than S-agnostic when considering the whole process. Thus, the results show that although Sonda invested more time in learning the queries (which are needed to achieve the better results), the optimization reduced the time spent executing them. While we focus the discussion on times achieved for the controlled configuration, because those values were more stable, Table 1 also shows the differences between the controlled and Web configurations. Compared to their controlled versions, Sonda-A, Sonda-C, S-based and S-agnostic were 3.6x, 3.3x, 5.8x and 1.9x slower, respectively. This suggests that delays caused by the external DBpedia endpoint have the largest negative impact on S-based. Accordingly, the performance improvement Sonda could achieve over S-based is larger in the Web setting. S-agnostic yields the best time performance here because, as opposed to the other systems, it does not require learning and thus does not have to retrieve data samples from the Web endpoints. 




\textbf{Number of Queries}. The connection between cost and quality can be seen more clearly in the relation between the number of queries and F1. Sonda-C, Sonda-A, S-based and S-agnostic considered (during learning) on average 31.39, 19.72, 3.94 and 5.06 queries, which translates to F1 values of 85.03, 80.92, 73.47 and 64.22, respectively. 
In all cases, Sonda achieves a considerable reduction in the number of queries evaluated per instance (during execution). In some cases (e.g., tasks 10, 14 and 18), it performed close to one query per instance.
%The purpose here is to show  that Sonda's predictor and branching policy was very efficient, selecting only a few queries per instance, as well as, very effective, selecting queries that produce near optimal PC, in many cases. In a few cases (11 and 13), Sonda evaluated a larger number of queries per instances, which indicates that there is a lot of room for improvement w.r.t the number of queries evaluated.
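This per-instance reduction stems from executing ranked queries one at a time and stopping early. The following is a simplified sketch; the best-first ordering and the non-empty-result stop condition are our simplifying assumptions about Sonda's predictor and branching policy.

```python
def select_candidates(ranked_queries, execute):
    """Run a source instance's queries best-first, stopping at the first
    query that yields a non-empty candidate set; returns the candidates
    and the number of queries actually evaluated."""
    for i, query in enumerate(ranked_queries, start=1):
        results = execute(query)
        if results:  # first non-empty result set wins
            return results, i
    return set(), len(ranked_queries)
```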

\begin{figure} [h]
\vspace{-1pt}
\centering
\includegraphics[scale=0.42]{g1.pdf}
\caption{F1 for Sonda-A, S-agnostic and S-based for query limits 10, 30, 50 and 100.} 
\vspace{-10pt}
\label{fig:limitsagnostic}
\end{figure} 

  \begin{figure} [h]
\vspace{-1pt}
\centering
\includegraphics[scale=0.42]{g2.pdf}
\caption{Execution time for Sonda-A, S-agnostic and S-based for query limits 10, 30, 50 and 100.} 
\vspace{-1pt}
\label{fig:limitsbased}
\end{figure} 

\textbf{Number of Results}. Quality (F1) is also related to the number of results retrieved by the queries. Fig.\ \ref{fig:limitsagnostic} and Fig.\ \ref{fig:limitsbased} show the effect of the query limit on Sonda-A, S-agnostic and S-based. Four query limits were used: 10, 30, 50 and 100 elements. 
We can see that only the F1 values for S-based improved consistently with increasing query limit, while the execution times for both S-based and S-agnostic grew with increasing query limit. This effect on time could not be observed for Sonda-A because, due to the optimization, a small query limit sometimes resulted in a greater number of queries that had to be executed. We observed that while PC consistently improved with increasing limit (more results are incorporated), RR sometimes got worse (because more results also include more negative matches). Thus, increasing the query limit has a mixed impact on F1, while for the baselines it unambiguously resulted in higher processing cost. For all other results in this paper we used a query limit of 30, since for most of the evaluated datasets the queries retrieved fewer than 30 elements.

\textbf{Query Types}. Fig.\ \ref{fig:frequency} shows, for each task, the percentage of query types executed to find the optimal candidate set. It illustrates that, to produce non-empty candidate sets, all query types were useful to Sonda-A. 
\begin{figure}[h]
\vspace{-10pt}
\centering
\includegraphics[scale=0.5]{g4.pdf}
\caption{Percentages of query types executed by Sonda-A per task.} 
\vspace{-10pt}
\label{fig:frequency}
\end{figure}
