\section{Experimental Evaluation}

In this section, we describe our evaluation, which is based on the instance-matching track of the Ontology Alignment Evaluation Initiative (OAEI). This track evaluates the effectiveness of instance-matching approaches over Web data, which is exactly the goal of our evaluation. SERIMI was the second-best system in OAEI 2011. Due to space limitations, we do not include these 2011 results but focus on those we obtained for the datasets used in 2010. These 2010 datasets are also more diverse in terms of heterogeneity (differences in the classes of entities) and therefore provide richer insights into the advantages and limitations of the class-based disambiguation proposed here.

\subsection{Experiment Setting}
\textbf{Collections.} We used the life science (LS) collection (which includes DBPedia,
%\footnote{http://dbpedia.org/About}  
Sider,
%\footnote{http://www4.wiwiss.fu-berlin.de/sider/}  
Drugbank,
%\footnote{http://www4.wiwiss.fu-berlin.de/drugbank/}  
LinkedCT,%\footnote{http://data.linkedct.org/}  
Dailymed,
%\footnote{http://www4.wiwiss.fu-berlin.de/dailymed/} 
% 
TCM,
%\footnote{http://code.google.com/p/junsbriefcase/wiki/RDFTCMData} 
and Diseasome) 
%\footnote{http://www4.wiwiss.fu-berlin.de/diseasome/} 
and the Person-Restaurant (PR) collection provided by this benchmark. 

\textbf{Evaluation metrics and alternative approaches.} We used precision, recall, and F1 to measure the effectiveness of the proposed approach. True positives are the mappings found by SERIMI that appear in the provided reference mapping (the ground truth); false positives are the mappings found by SERIMI that do not. For comparison, we used the results of the related approaches RiMOM and ObjectCoref as reported for OAEI 2010. 
%to enable a direct and fair 
%for comparison. 
%As discussed, these two preliminary works are the main solutions for effective instance matching in this heterogeneous setting.  
%In addition, we investigated how SERIMI performs using different settings for the parameters $\delta$ and $k$. 
%This way, we assess how these parameters affect the overall performance of SERIMI. 
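Treating both the reference mapping and the system output as sets of (source, target) URI pairs, the three metrics can be computed as in the following sketch (the example pairs are hypothetical, not taken from the benchmark):

```python
def precision_recall_f1(found, reference):
    """Score predicted links against a reference mapping.

    `found` and `reference` are sets of (source_uri, target_uri) pairs.
    True positives are predicted links that appear in the reference;
    false positives are predicted links that do not.
    """
    tp = len(found & reference)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: 4 reference links, 3 predicted, 2 correct.
reference = {("s1", "t1"), ("s2", "t2"), ("s3", "t3"), ("s4", "t4")}
found = {("s1", "t1"), ("s2", "t2"), ("s5", "t9")}
p, r, f1 = precision_recall_f1(found, reference)
# p = 2/3, r = 0.5, f1 = 4/7
```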
\subsection{Experiment Results}
%In this section we show the results of our experiments. 
Fig. 4 and Fig. 5 show SERIMI's performance as we changed $\delta$ and $k$. We observed that the standard deviation of precision and recall is close to zero in the cases where the pseudo-homonym sets are small.
%, e.g. have cardinality equal to 1 (Drugbank-Sider). 
The parameters $\delta$ and $k$ had no effect in these cases because the same (number of) instances were selected as results (e.g., only one). Otherwise, performance varied because differences in $\delta$ (and also in $k$) led to a different selection of instances. Using $\delta_{m}$, the automatically computed threshold discussed in Section 4.3, performed relatively well on average. Therefore, for all other experiments reported in this paper, we used $\delta = \delta_{m}$. 
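The role of $\delta$ can be illustrated by the following sketch of threshold-based selection over an already ranked candidate list. This is an assumption-laden simplification: the scoring function (SERIMI's CRDS ranking) is taken as given, and the candidate URIs and scores below are invented for illustration.

```python
def select_by_delta(candidates, delta):
    """Keep candidates whose score is within `delta` of the best score.

    `candidates` is a list of (instance_uri, score) pairs; the scores
    are assumed to come from a ranking step that has already run.
    With delta = 0 only the top-scored instances survive; larger
    values admit progressively more candidates.
    """
    if not candidates:
        return []
    best = max(score for _, score in candidates)
    return [uri for uri, score in candidates if score >= best - delta]

# Hypothetical ranked candidates for one source instance.
ranked = [("db:Magnesium", 0.91),
          ("db:Book:Magnesium", 0.40),
          ("db:Category:Magnesium", 0.35)]
tight = select_by_delta(ranked, 0.1)   # only the top candidate
loose = select_by_delta(ranked, 0.6)   # all three candidates
```

This makes concrete why a small pseudo-homonym set is insensitive to $\delta$: with a single candidate, any threshold returns the same result.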
\begin{figure}[h]
\centering
\includegraphics[width=5.5cm]{fig8.png}
\caption{Top-$k$ F-measure}
\hspace{0.2cm} % a little space between the figures
\includegraphics[width=5.5cm]{fig7.png}
\caption{$\delta_{threshold}$ F-measure}
\vspace{-20pt}
\end{figure}

\begin{table*}[ht]
\centering
\caption{Precision, recall, and F1 for all dataset pairs. ObjectCoref's results are not available for all pairs.}
\scriptsize\tt
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset Pair / & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider}\\
Approaches & \multicolumn{3}{|c|}{DBPedia} & \multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Diseasome} & \multicolumn{3}{|c|}{DrugBank} & \multicolumn{3}{|c|}{TCM}\\
\hline
 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1\\
\hline
SERIMI & 0.50 & \textbf{0.62} & 0.55 & \textbf{0.78} & 0.58 & 0.66 & \textbf{0.92} & 0.83 & 0.87 & \textbf{0.97} & \textbf{0.97} & 0.97 & \textbf{0.97} & \textbf{0.98} & 0.97\\
RiMOM & \textbf{0.71} & 0.48 & 0.57 & 0.57 & \textbf{0.71} & 0.62 & 0.32 & \textbf{0.84} & 0.45 & 0.96 & 0.34 & 0.50 & 0.78 & 0.81 & 0.79\\
ObjectCoref & - & - & - & - & - & - & - & - & - & - & - & - & - & - & -\\
\hline \hline
Dataset Pair / & \multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Drugbank}\\
Approaches & \multicolumn{3}{|c|}{DBpedia} & \multicolumn{3}{|c|}{LinkedCT} & \multicolumn{3}{|c|}{TCM} & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider}\\
\hline
 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1\\
\hline
SERIMI & \textbf{0.61} & \textbf{0.33} & 0.43 & \textbf{0.23} & 0.05 & 0.08 & \textbf{0.23} & \textbf{0.91} & 0.37 & 0.54 & 0.87 & 0.67 & \textbf{0.33} & 0.92 & 0.48\\
RiMOM & 0.25 & 0.29 & 0.26 & 0.07 & 0.24 & 0.11 & 0.16 & 0.54 & 0.23 & \textbf{0.57} & 0.71 & 0.62 & - & - & -\\
ObjectCoref & - & - & - & - & - & - & - & - & - & 0.55 & \textbf{0.99} & 0.70 & 0.30 & \textbf{0.99} & 0.46\\
\hline \hline
Dataset Pair / & \multicolumn{3}{|c|}{Diseasome} & \multicolumn{3}{|c|}{Person11} & \multicolumn{3}{|c|}{Person21} & \multicolumn{3}{|c|}{Restaurant1} & \multicolumn{3}{|c|}{}\\
Approaches & \multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Person12} & \multicolumn{3}{|c|}{Person22} & \multicolumn{3}{|c|}{Restaurant2} & \multicolumn{3}{|c|}{}\\
\hline
 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & & &\\
\hline
SERIMI & 0.83 & \textbf{0.90} & 0.87 & \textbf{1.00} & \textbf{1.00} & 1.00 & 0.56 & 0.39 & 0.46 & 0.77 & 0.77 & 0.77 & & &\\
RiMOM & - & - & - & 1.00 & 1.00 & 1.00 & 0.95 & \textbf{0.99} & 0.97 & 0.86 & 0.77 & 0.81 & & &\\
ObjectCoref & 0.84 & 0.67 & 0.74 & 1.00 & 0.99 & 0.99 & \textbf{1.00} & 0.90 & 0.95 & \textbf{0.99} & \textbf{0.80} & 0.88 & & &\\
\hline
\end{tabular}
\end{table*}
As Table 4 shows, SERIMI outperformed the alternative approaches in 70\% of the cases, improving F1 by 10\% on average in those cases. Sider-Diseasome and Sider-Drugbank were particularly problematic for the alternative approaches; there, SERIMI achieved gains of 42\% and 47\% in F1, respectively. 
%Also the improvement in the case of DailyMed-DBpedia is substantial, where SERIMI added a gain of 17\% in precision compared to the alternative approaches. 
It seems that 
%in all these cases,
SERIMI was more successful in refining candidates. 
% After blocking, SERIMI could filter out many irrelevant candidates, resulting i
%In this case, considering that the blocking strategy was the main component responsible for the increase in the recall, it shows a significant improvement in the precision caused by the class-based disambiguation. 
For example, there were many candidate instances labeled ``Magnesium'' in DBPedia (e.g., ``Isotopes\_of\_magnesium'', ``Magnesium'', ``Category:Magnesium'', ``Book:Magnesium''). SERIMI used information from the other pseudo-homonym sets to resolve this ambiguity. Because the other pseudo-homonym sets contained many more instances of type \verb+drug+ (e.g., \verb+Morphine+, \verb+Diazepam+, \verb+Diclofenac+) than instances of the types \verb+book+ and \verb+category+, SERIMI was able to select the correct instance \verb+Magnesium+, which belongs to the class \verb+drugs+. 
%An interesting conclusion about this case is that, in general, SERIMI is very effective to disambiguate pseudo-homonyms sets that contains instances that belong to multiple different classes, as in the previous example.
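The intuition behind this example can be sketched as follows: count the class support across all pseudo-homonym sets, then pick from the ambiguous set the candidate whose class has the strongest support. This is an illustrative simplification of the idea, not SERIMI's exact CRDS computation, and the URIs and class labels below are invented for the example.

```python
from collections import Counter

def disambiguate(ambiguous_set, all_sets, classes_of):
    """Pick from `ambiguous_set` the candidate whose class occurs most
    often across all pseudo-homonym sets.

    `classes_of` maps an instance URI to its set of classes (e.g. from
    rdf:type). Simplified sketch of class-based disambiguation.
    """
    support = Counter()
    for s in all_sets:
        for inst in s:
            support.update(classes_of[inst])
    return max(ambiguous_set,
               key=lambda inst: max((support[c] for c in classes_of[inst]),
                                    default=0))

# Hypothetical data mirroring the ``Magnesium'' example.
classes_of = {
    "db:Magnesium": {"drug"},
    "db:Book:Magnesium": {"book"},
    "db:Category:Magnesium": {"category"},
    "db:Morphine": {"drug"},
    "db:Diazepam": {"drug"},
}
sets = [{"db:Magnesium", "db:Book:Magnesium", "db:Category:Magnesium"},
        {"db:Morphine"}, {"db:Diazepam"}]
chosen = disambiguate(sets[0], sets, classes_of)  # -> "db:Magnesium"
```

Because the class \verb+drug+ is supported by three instances across the sets while \verb+book+ and \verb+category+ are each supported by one, the drug candidate wins.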

The poor performance in the Person21-Person22 pair is due to the nature of the data. 
%These datasets where built by adding spelling mistakes %to the properties and literals values of their original datasets. The blocking strategy used by SERIMI is not specifically tuned to deal with this problem of misspellings, and thus, was not able to build the pseudo-homonyms sets properly, resulting in low recall. In contrary, ObjectCoref and RiMOM use more specific functions for dealing with the similarity of misspelled data. In order to further improve the performance of SERIMI, more advanced similarity matching functions may be considered to yield better result candidates, which can then be refined by SERIMI's disambiguation.  
We noticed that the CRDS function did not perform well here 
%for this pair Person21-Person22 
because the instances in these datasets all belong to the same class (i.e., 
%e.g. instances that have the label \verb+John White+ belong to the class 
\verb+person+). 
%We noted that 
SERIMI is designed for the heterogeneous case, where the instances to be matched belong to multiple domains. Matching instances in this single-domain (same-class) setting is indeed ``problematic'' because the idea behind SERIMI is to use class information for disambiguation. Since all candidates belong to the same class, there was not enough information for SERIMI to distinguish and disambiguate instances. Clearly, a wide range of techniques exists for instance matching in this single-domain setting, and they should be used instead of SERIMI. We consider our class-based disambiguation a complementary approach that specifically targets the heterogeneous setting. 
%However, 
%We observe that 
%Extending SERIMI to capture not only the instances, but also their neighbours as features, can provide additional class information (i.e. the classes of the neighbours) that helps the disambiguation even in this single domain setting.  

As a final observation, we noticed that SERIMI performed better than the alternative systems, especially in the hard cases (e.g., Dailymed-DBPedia, Dailymed-LinkedCT, Dailymed-TCM, and Drugbank-Sider).
%are due to errors in the reference mapping. In the technical report\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking/blob/master/TECH-REPORT.PDF}, we list a few examples. 
%\indent\textbf{Sider-Dailymed:} \\
%  \indent\textit{Source resource:} \\
%	http://www4.wiwiss.fu-berlin.de/sider/resource/drugs/151165 \\
%\indent\textit{Current reference alignment:} \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/drugs/1050 \\
% \indent\textit{Missing alignment: }\\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/drugs/1618 \\
%\indent\textbf{Dailymed-DBPedia:} \\
%\indent\textit{ Source resource:} \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/Allergan,\_Inc. \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/AstraZeneca \\
%    http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/B.\_Braun \\
%\indent\textit{ Missing alignment: } \\
%	http://dbpedia.org/resource/Allergan \\
%	http://dbpedia.org/resource/AstraZeneca \\
%	http://dbpedia.org/resource/B.\_Braun\_Melsungen 
%Although this reference mapping is thus not ideal, it is the best known ground truth provided by the community. This quality problem affects all systems equally. Hence, the comparisons provided here still yields fair conclusions. Yet, this observation made here suggests that for a better understanding of instance matching solutions, some more work is needed in the future for establishing a higher quality benchmark. 

In conclusion, we observed a considerable improvement in accuracy when we applied the proposed class-based disambiguation to results obtained from a simple blocking procedure. 
%We expect further improvement when more sophisticated techniques for blocking and computing results are employed. 
Our approach is especially recommended for cases where there is little overlap between the schemas of the instances being matched. SERIMI is available on GitHub\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking}.

