\begin{table*}[h]
\centering
\scriptsize\tt
\caption{Sonda F1-measure (harmonic mean of precision and recall) compared to ExampleDriven and other tools that participated in the OAEI 2011 benchmark.}
\label{tab:oaei2011}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Dataset & $Sonda_A$ & $Sonda_C$ & KnoFuss+GA & AgreementMaker & SERIMI & Zhishi.links & ExampleDriven\\ \hline
DBPedia - Geo. & 0.63 & 0.63  & 0.89 & 0.69 & 0.68 & 0.92 & 0 \\ \hline
DBPedia - Corp. & 0.91 & 0.91 & 0.92 & 0.74 & 0.88 & 0.91 & 0\\ \hline
DBPedia - People & 0.96 & 0.96 & 0.97 & 0.88 & 0.94 & 0.97 & 0\\ \hline
Freebase - Geo. & 0.90 & 0.90 & 0.93 & 0.85 & 0.91 & 0.88 & 0\\ \hline
Freebase - Corp. & 0.87 & 0.87 & 0.92 & 0.80 & 0.91 & 0.87 & 0\\ \hline
Freebase - People & 0.96 &  0.96 & 0.95 & 0.96 & 0.92 & 0.93 & 0\\ \hline
Geonames & 0.63 & 0.63  & 0.90 & 0.85 & 0.80 & 0.91 & 0\\ \hline
Average & 0.84 & 0.84 & 0.93 & 0.85 & 0.89 & 0.92 & 0\\ \hline
\end{tabular}  
\end{table*}

\begin{table*}[h]
\centering
\scriptsize\tt
\caption{Sonda F1-measure (harmonic mean of precision and recall) compared to ExampleDriven and other tools that participated in the OAEI 2010 benchmark.}
\label{tab:oaei2010}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dataset & $Sonda_A$ & $Sonda_C$ & SERIMI & ObjectCoref & RiMOM & ExampleDriven \\ \hline
Sider-Dailymed & 0.63 & 0.61   & \textbf{0.66} & - & 0.62  & 0\\ \hline
Sider-Diseasome & \textbf{0.90} & \textbf{0.90} & 0.87 & - & 0.45 & 0\\ \hline
Sider-Drugbank & 0.93 & 0.93  & \textbf{0.97} & - & 0.50  & 0\\ \hline
Sider-TCM & 0.92 & 0.92  & \textbf{0.97} & - & 0.79 &  0\\ \hline
Dailymed-Sider & 0.93  &\textbf{0.94} & 0.67 & 0.70 & 0.62  & 0\\ \hline
Drugbank-Sider & \textbf{0.80}  & \textbf{0.80} & 0.48 & 0.46 & -  & 0\\ \hline
Diseasome-Sider & \textbf{0.95} & \textbf{0.95}  & 0.87 & 0.74 & -  & 0\\ \hline
Person11-Person12 & 0.95 & 0.95  & \textbf{1.00} & 0.99 & \textbf{1.00}  & 0\\ \hline
Person21-Person22 & 0.45 & 0.45  & 0.46 & 0.95 & \textbf{0.97} & 0\\ \hline
Restaurant1-Restaurant2 & \textbf{0.98} & \textbf{0.98}  & 0.77 & 0.81  & 0.88 & 0\\ \hline
Average (all datasets) & \textbf{0.84} & \textbf{0.84} & 0.77 & - & - & 0\\ \hline
Average (datasets evaluated by ObjectCoref) & 0.84 & \textbf{0.85} & 0.71 & 0.78 & - & 0\\ \hline
Average (datasets evaluated by RiMOM) & \textbf{0.84} & \textbf{0.84} & 0.80 & - & 0.73 & 0\\ \hline
\end{tabular}  
\end{table*}



\section{Evaluation on Instance Matching}
In this section, we discuss how we evaluate Sonda in the context of instance matching. We apply two matchers over the candidate sets produced by Sonda and compare the matching results with state-of-the-art instance matching approaches that participated in the OAEI 2010 and 2011 challenges. The ExampleDriven approach was also evaluated over the same datasets. As evaluation metrics, we use the standard precision, recall, and F1 measures.
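For reference, F1 is the harmonic mean of precision and recall. Writing $TP$, $FP$, and $FN$ for the true positives, false positives, and false negatives of a produced alignment with respect to the reference alignment:
\begin{equation}
P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot P \cdot R}{P + R}.
\end{equation}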

\subsection{Instance Matching Results} 

In this section, we compare the F1-measure of Sonda with that of the alternative approaches evaluated over the same datasets; Tables~\ref{tab:oaei2011} and~\ref{tab:oaei2010} summarize the results.
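To make the scoring concrete, the sketch below computes precision, recall, and F1 for an alignment represented as a set of URI pairs. This is a minimal illustration of the metrics only, not Sonda's implementation; the function name, the example URIs, and the pair sets are hypothetical.

\begin{verbatim}
# Minimal sketch (not Sonda's implementation): precision, recall,
# and F1 for an alignment given as a set of (source, target) pairs.
def evaluate_alignment(produced, reference):
    tp = len(produced & reference)            # correct matches found
    precision = tp / len(produced) if produced else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: 2 of the 3 produced pairs are correct and
# the reference holds 4 pairs -> P = 0.67, R = 0.50, F1 = 0.57.
produced = {("db:Berlin", "geo:2950159"),
            ("db:Paris",  "geo:2988507"),
            ("db:Rome",   "geo:0000000")}     # wrong match
reference = {("db:Berlin", "geo:2950159"),
             ("db:Paris",  "geo:2988507"),
             ("db:Rome",   "geo:3169070"),
             ("db:Madrid", "geo:3117735")}
print(evaluate_alignment(produced, reference))
\end{verbatim}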
%Notice that although not ideal, Sonda completed the task in a reasonable time, considering that our approach queries the datasets' SPARQL endpoints directly.
%The drawback of this approach is the large time delay caused by disk access, packing the data into the SPARQL protocol, and transferring it over the network. On the other hand, we can integrate two live data endpoints without any configuration, parameter tuning, data pre-processing, or indexing, in a couple of minutes or a few hours.

%In summary, Sonda yields a better F1 score in 96\% of the cases. This means Sonda is more effective in selecting the candidates than the other approaches, mainly due to the use of multiple query types and the smart selection of queries by the branch-and-bound algorithm.

