\subsection{SERIMI vs. Alternative Approaches} 
We compared SERIMI with state-of-the-art approaches. From the literature, we carefully selected the systems that reported the best performance in the benchmarks in which they participated. These systems are representative of a large number of approaches used for instance matching.

We compared SERIMI with \emph{RiMOM} and \emph{ObjectCoref2010} (OC2010) using the data and results of OAEI 2010~\cite{Euzenat10}. To ensure the validity of this evaluation, we also included recently published results for ObjectCoref~\cite{DBLP:conf/www/HuCQ11}, referred to as ObjectCoref2012 (OC2012).
Using OAEI 2011 data and published results~\cite{DBLP:conf/semweb/EuzenatFHHMNRSSSST11}, we compared SERIMI with \emph{AgreementMaker} (AM) and \emph{Zhishi.links} (Zhi). 
%we compared SERIMI with KnoFuss+GA, using the results recently published by KnoFuss+GA\cite{DBLP:conf/aswc/NikolovUMR08} at ESWC 2012. 
Using the same data, we also compared SERIMI with the latest state-of-the-art approaches for instance matching that did not participate in OAEI: \emph{PARIS}~\cite{DBLP:journals/pvldb/SuchanekAS11} and \emph{SIFI-Hill}~\cite{DBLP:journals/pvldb/WangLYF11}.

\textbf{OAEI 2010.} Table \ref{table:oaei2010oc} shows the results for OAEI 2010. Missing values in the table indicate that the corresponding results were not published by the authors at OAEI. On average, SERIMI largely outperformed both systems: 97\% vs. 76\% F1 against OC2010 and 93\% vs. 72\% F1 against RiMOM (averages computed over the tasks for which each system reported results).
SERIMI achieved a considerable performance gain for the life science collection. Here, class-based matching played an important role because source and target instances often belong to different classes. In Sider-Dailymed, for instance, there were instances of the types Drug and Ingredient sharing the same name that were incorrectly identified as candidate matches; these false positives were rejected by SERIMI thanks to class-based matching.
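The class-based rejection described above can be sketched as follows. This is an illustrative sketch only, not SERIMI's actual implementation; the data records, field names, and the \texttt{filter\_by\_class} function are hypothetical:

```python
# Illustrative sketch (not SERIMI's code): reject candidate matches whose
# class disagrees with the class of interest inferred for the source
# instances -- e.g. Ingredient candidates for a Drug source instance.

def filter_by_class(candidates, class_of_interest):
    """Keep only candidates whose type matches the class of interest."""
    return [c for c in candidates if c["type"] == class_of_interest]

# Two candidates share the label "Aspirin" but differ in class:
candidates = [
    {"uri": "t:drug/aspirin", "label": "Aspirin", "type": "Drug"},
    {"uri": "t:ingredient/aspirin", "label": "Aspirin", "type": "Ingredient"},
]
print(filter_by_class(candidates, "Drug"))
```

When the source instances are inferred to be drugs, the Ingredient candidate is discarded even though its label is identical.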

SERIMI was outperformed by OC2010 and RiMOM in the Person collection. One reason is that this data involves artificially generated spelling mistakes, which OC2010 and RiMOM handled with special direct-matching strategies. More importantly, SERIMI could not yield better results because class-based matching has limited impact when all candidates belong to the same class and the data schema is well-defined. In this scenario, all instances belong to the class Person and the source and target schemas completely overlap; thus, instances did not vary greatly in terms of class-related information. %The best SERIMI performance in the other cases is attributed to candidate set generation process which produced candidate sets with high recall and precision. The effort to disambiguate candidate in those cases were minimal.
 
Compared to OC2012, which published results only for the easiest matching tasks, SERIMI also achieved a better average performance (97\% vs. 95\% F1).


\begin{center}
\begin{table}[h]
\scriptsize
%\small
\centering
\caption{F1 performance of SERIMI, OC2010, RiMOM, and OC2012 on OAEI 2010 data; some results were not available for OC2010, RiMOM, and OC2012.}
\begin{tabular}{ | c | c  | c | c | c | } 
\hline
Datasets & SERIMI  & OC2010 & RiMOM & OC2012\\
\hline
Sider-Dailymed & \textbf{0.74} & - &  0.62 & -  \\
Sider-Diseasome & \textbf{0.89} & - &  0.45 & -  \\
Sider-Drugbank & \textbf{0.98} & - & 0.50& -  \\
Sider-Tcm & \textbf{0.99} & - & 0.79  & - \\
Dailymed-Sider & \textbf{1.0}  & 0.70 & 0.62 & - \\
Diseasome-Sider & \textbf{0.97} & 0.74 & - & - \\
Drugbank-Sider & \textbf{1.0} & 0.46 & - & - \\
Person11-Person12 & 0.95 &  \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
Person21-Person22 & 0.91 &   0.95  & \textbf{0.97} & 0.95\\
Restaurant1-Rest.2 & \textbf{0.97} &   0.73 &  0.81 & 0.89\\
\hline 
Average (OC2010)  & \textbf{0.97}	&	0.76 & - & - \\
Average (RiMOM)  & \textbf{0.93}	&	-  & 0.72 & -\\
Average (OC2012)  & \textbf{0.97}	&	- & -  & 0.95 \\
\hline 
\end{tabular}  
\label{table:oaei2010oc}
\end{table}  
\end{center} 


\textbf{OAEI 2011.} As shown in Table \ref{table:oaei2012table},
%SERIMI had a competitive average performance (91\%  F1) compared to KnoFuss+GA (93\% F1). 
SERIMI had the same average performance as Zhi. In particular, Zhi performed better in tasks involving the location datasets (DB-Geo and GeoNAMES) because, unlike SERIMI, it made use of domain knowledge and location-specific similarity functions.  %In general, SERIMI achieve high F1 for most of the cases.
% In general, the location datasets where critical to SERIMI because the candidate sets where highly ambiguous and the class-disambiguation could not find a unique class of interest with certainty. Beside of that, the algorithm that produced the candidate set did not retrieve 100\% recall; for instance, the 82\% F1 in the DB-Geo where achieve for candidate sets with 88\% recall. It indicates that a better candidate set algorithm could help SERIMI to achieve higher F1.
%The results verify our original assumption that the class-based disambiguation combined with direct matching increases the accuracy of the matches.
SERIMI largely outperformed SIFI (91\% vs. 82\% F1). SIFI performed slightly better than SERIMI on Nyt-DB-Per and Nyt-Freebase-Per. For these tasks, SIFI was able to obtain more finely tuned thresholds, which led to better results. As opposed to SIFI, which relies on training data for this threshold tuning, SERIMI is completely unsupervised. For SIFI, we used 10\% of the OAEI ground truth as positive examples and 10\% of the wrong alignments in the candidate sets as negative examples.
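The threshold tuning that a supervised system performs from labeled pairs can be illustrated with a small sketch. This is not SIFI's actual algorithm; the function and sample data below are hypothetical, showing only the general idea of choosing the similarity threshold that maximizes F1 on a labeled sample:

```python
# Illustrative only: pick a similarity threshold from labeled pairs,
# as a supervised system might, whereas an unsupervised system like
# SERIMI must fix one threshold for all tasks.

def tune_threshold(scored_pairs):
    """scored_pairs: list of (similarity, is_match) tuples.
    Return the candidate threshold that maximizes F1 on this sample."""
    best_t, best_f1 = 0.0, -1.0
    for t, _ in scored_pairs:  # each observed score is a candidate threshold
        tp = sum(1 for s, m in scored_pairs if s >= t and m)
        fp = sum(1 for s, m in scored_pairs if s >= t and not m)
        fn = sum(1 for s, m in scored_pairs if s < t and m)
        if tp == 0:
            continue
        p, r = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * p * r / (p + r)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

pairs = [(0.9, True), (0.8, True), (0.6, False), (0.3, False)]
print(tune_threshold(pairs))  # -> 0.8, which separates matches cleanly here
```

With enough labeled pairs per task, such per-task tuning explains why a supervised system can edge out a fixed-threshold unsupervised one on individual datasets.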
%We fixed a threshold for all cases in our direct matching. 
PARIS obtained an average performance of 47\% F1, considerably worse than SERIMI. PARIS used both schema- and data-level features for matching. However, it only employed exact matching, i.e., it considered instances as matches when their features matched exactly. In the experiments of PARIS's authors, good results could be achieved because exact matching was sufficient for the tasks involved. For the tasks studied here, exact matching led to very low performance.
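The gap between exact and approximate feature matching can be illustrated with a minimal sketch (not PARIS's code; the helper functions and the 0.85 threshold are hypothetical, using Python's standard-library SequenceMatcher as the similarity function):

```python
# Illustrative contrast: exact matching fails on small spelling
# variations that an approximate string similarity tolerates.
from difflib import SequenceMatcher

def exact_match(a, b):
    return a == b

def approx_match(a, b, threshold=0.85):
    return SequenceMatcher(None, a, b).ratio() >= threshold

# One inserted character defeats exact matching but not approximate:
print(exact_match("acetylsalicylic acid", "acetylsalicyclic acid"))
print(approx_match("acetylsalicylic acid", "acetylsalicyclic acid"))
```

On heterogeneous data, where labels routinely differ in spelling and formatting, relying on exact matching alone discards many true matches, which is consistent with the low F1 observed for PARIS here.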
%Moreover, to change the exact matching to another approximate matching is not possible without considerably re-modelling PARIS. 

Overall, the results show that SERIMI achieved the best accuracy. Further, there is room for improvement, as SERIMI so far neither uses training data nor exploits domain knowledge. Training data, for instance, could be exploited to fine-tune the threshold (as implemented by SIFI) or to train a more optimized combination of direct matching and class-based matching.

%The results proof that SERIMI approach is effective for solving instance matching over heterogeneous data.

%\subsection{On Precision, Recall and F1}
%\todo{I propose to eliminate this subsection.}
%So far, we focus our discussion on F1. In Figure \todo{}, we show average precision, recall and F1 for different configurations of SERIMI and systems....

%The class based matching (S+SR) was able to disambiguate the right cases with 97 \% precision. However, when combined with direct match approach (S+SR+DM) its precision decreased to 72\%, and its recall increased from 74\% to 76 \%.


