

\section{Experimental Evaluation}

In this section, we describe the collections, evaluation metrics, and alternative approaches used in our evaluation. Our evaluation is based on the instance-matching track of the Ontology Alignment Evaluation Initiative (OAEI 2010). This track focuses on evaluating the effectiveness of instance-matching approaches over Web Data, which is precisely the goal of our evaluation. 

\subsection{Experiment Setting}
\textbf{Collections.} We used the life science (LS) collection (which includes DBpedia,\footnote{http://dbpedia.org/About} Sider,\footnote{http://www4.wiwiss.fu-berlin.de/sider/} Drugbank,\footnote{http://www4.wiwiss.fu-berlin.de/drugbank/} LinkedCT,\footnote{http://data.linkedct.org/} Dailymed,\footnote{http://www4.wiwiss.fu-berlin.de/dailymed/} TCM,\footnote{http://code.google.com/p/junsbriefcase/wiki/RDFTCMData} and Diseasome\footnote{http://www4.wiwiss.fu-berlin.de/diseasome/}) and the Person-Restaurant (PR) collection provided by this benchmark. 

Tables 2 and 3 show the main statistics of these datasets: the number of instances and the number of reference mappings that make up the ground truth. 
%These datasets capture several cases of ambiguity, which make instance matching hard. For example, two distinct people called John White with different birth date. 
As shown in Table 3, we evaluated pairs of datasets that were also used by the other systems that participated in this benchmark. 
%The Person-Restaurant collection contains 3 pairs of datasets. Two of these pairs describe people and the other pair describes restaurants. The other datasets make up 11 pairs.  

We loaded the datasets (around 2GB of data in total) into an open-source instance of the Virtuoso Universal Server\footnote{http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/} installed on a local workstation. The exception was the DBpedia dataset, which we accessed online via its SPARQL endpoint. 
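Since DBpedia was accessed via its SPARQL endpoint, retrieving candidate instances for a source label reduces to a label-lookup query. The following is a minimal sketch of such a query builder, assuming \verb+rdfs:label+ as the label property; the actual query shape used by SERIMI may differ.

```python
# Hedged sketch: build a SPARQL query that retrieves candidate resources
# whose rdfs:label contains a given source label. The property choice and
# query shape are illustrative assumptions, not SERIMI's implementation.

def candidate_query(label, limit=50):
    """Build a SPARQL query selecting resources whose label contains `label`."""
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?s WHERE {{
  ?s rdfs:label ?lab .
  FILTER(CONTAINS(LCASE(STR(?lab)), "{label.lower()}"))
}}
LIMIT {limit}
"""

query = candidate_query("Magnesium")
print(query)
```

Such a query could be posted to the endpoint at \verb+http://dbpedia.org/sparql+; here we only construct the query string.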

\begin{table} 
\vspace{-10pt}
\caption{Number of instances per dataset. } 
\begin{tabular}{l c l c l cl c} 
\hline\noalign{\smallskip} 
Dataset	&  Instances & Dataset &  Instances & Dataset &  Instances & Dataset &  Instances \\
\noalign{\smallskip} 
\hline 
\noalign{\smallskip} 
Diseasome & 8150 & Drugbank & 19692 & Dailymed & 10004 & Sider & 2673 \\
TCM & 24579 & Person21 & 2400 & Person22 & 800 & Person11 & 1000 \\
Person12 & 2000 & Restaurant1 & 339 & Restaurant2 & 2256 & DBpedia & $>$1000000 \\
\hline  
\end{tabular} 
 \vspace{-10pt}
\end{table} 
 
\begin{table} 
\vspace{-10pt}
\begin{center}
\caption{Number of reference mappings for pairs of datasets evaluated.} 
\begin{tabular}{ lclc } 
\hline\noalign{\smallskip} 
Dataset Pair	& Ref. Mappings  & Dataset Pair	 & Ref. Mappings   \\
\noalign{\smallskip} 
\hline 
\noalign{\smallskip} 
Sider$\rightarrow$DBpedia  &	1509  & Dailymed$\rightarrow$DBpedia &	2549 \\
Sider$\rightarrow$Dailymed &	1634 &	Dailymed$\rightarrow$LinkedCT &	27729 \\
Sider$\rightarrow$Diseasome & 	173 &	Dailymed$\rightarrow$TCM &	33 \\
Sider$\rightarrow$DrugBank &	1140	& Dailymed$\rightarrow$Sider &	1592 \\
Sider$\rightarrow$TCM &	171 &	Drugbank$\rightarrow$Sider  &	284 \\
Diseasome$\rightarrow$Sider & 	238 &	Person11$\rightarrow$Person12 &	500 \\
Restaurant1$\rightarrow$Restaurant2 &	112 &	Person21$\rightarrow$Person22 &	400 \\
\hline 

\end{tabular} 
\end{center} 
\vspace{-10pt}
\end{table} 


\textbf{Evaluation metrics and alternative approaches.} To evaluate the effectiveness of the proposed instance-matching approach, we used precision and recall (and the F1 measure, which combines both). True positives are the mappings found by SERIMI that appear in the reference mappings of the ground truth; false positives are the mappings found by SERIMI that do not appear in the ground truth. 
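These metrics can be computed directly from the set of discovered mappings and the set of reference mappings. A minimal sketch, with illustrative mapping pairs rather than actual benchmark data:

```python
# Hedged sketch: precision, recall, and F1 over sets of (source, target)
# mapping pairs. The pairs below are illustrative, not benchmark data.

def evaluate(found, reference):
    """Return (precision, recall, f1) for sets of (source, target) pairs."""
    true_positives = len(found & reference)
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

reference = {("s:drug1", "t:drugA"), ("s:drug2", "t:drugB"),
             ("s:drug3", "t:drugC")}
found = {("s:drug1", "t:drugA"), ("s:drug2", "t:drugX")}

p, r, f1 = evaluate(found, reference)
# One of two found mappings is correct (p = 0.5); one of three
# reference mappings is recovered (r = 1/3).
```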

We used two alternative approaches in our experiments: RiMOM and ObjectCoref. As discussed, these two prior works are the main solutions for effective instance matching in this heterogeneous setting. Using the same datasets and reference mappings, we aim to provide a fair and direct comparison. In addition, we investigated how SERIMI performs under different settings of the parameters $\delta$ and $k$. 
%This way, we assess how these parameters affect the overall performance of SERIMI. 

\subsection{Experiment Results}
%In this section we show the results of our experiments. 
Fig. 4 and Fig. 5 show SERIMI's performance when we vary $\delta$. We observed that the standard deviation of precision and recall is close to zero in the cases where the pseudo-homonym sets are small, e.g. have cardinality equal to 1 (Drugbank-Sider). The parameters $\delta$ and $k$ have no effect in these cases because the same (number of) instances are selected (e.g. only one) as results. Otherwise, performance varies because differences in $\delta$ (and also in $k$) lead to a different selection of instances, with a clear impact on precision and recall. The automatically computed threshold $\delta_{m}$, discussed in Section 4.3, performs relatively well on average. Therefore, for all other experiments reported in this paper, we used $\delta = \delta_{m}$. 

\begin{figure}
\begin{minipage}[b]{0.5\linewidth} % A minipage that covers half the page
\centering
\includegraphics[width=5.5cm]{fig8.png}
\caption{Top-k F-measure}
\end{minipage}
\hspace{0.2cm} % To get a little bit of space between the figures
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{fig7.png}
\caption{$\delta_{threshold}$ F-measure}
\end{minipage}
\vspace{-10pt}
\end{figure}

%In Table 3 we show the performance of SERIMI in solving instance matching for all pairs of datasets discussed in Section 5, in comparison with our two baselines: RiMOM and ObjectsCoref.

\begin{table} 
\caption{Precision, recall, and F1 of SERIMI, RiMOM, and ObjectCoref for all dataset pairs. ObjectCoref's results are not available for all pairs of datasets.} 
\scriptsize\tt
\begin{tabular}{ |c | c | c | c  | c | c | c | c | c | c | c |  c | c | c | c | c |} 
\hline
Dataset Pair /  &	\multicolumn{3}{|c|}{Sider}  &	\multicolumn{3}{|c|}{Sider} &	\multicolumn{3}{|c|}{Sider  	}  &	\multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider  }\\
  Approaches &	\multicolumn{3}{|c|}{DBpedia}  &	\multicolumn{3}{|c|}{Dailymed} &	\multicolumn{3}{|c|}{Diseasome  	}  &	\multicolumn{3}{|c|}{DrugBank} & \multicolumn{3}{|c|}{TCM  }\\
\hline 
	& P &	R & F1	 & P	  & R & F1	  & P & 	R	 & F1 & P  &	R & F1 & P  &	R & F1\\
\hline 
SERIMI &		0.50 &		\textbf{0.62} &	0.55 &	\textbf{0.78} &		0.58 &		0.66&	\textbf{0.92}	 & 	0.83	&	0.87  & \textbf{	0.97}	 &	 \textbf{0.97} &	0.97 	 &	\textbf{0.97}  &		\textbf{0.98} &	0.97 \\
RiMOM	 &	\textbf{0.71}  &		0.48  &	0.57  &		0.57	  &	\textbf{0.71} &	0.62  &		0.32  &		\textbf{0.84}	&	0.45   &	0.96  &		0.34	&	0.50  &	0.78	  &	0.81 &	0.79 \\
ObjectCoref &	-	  &	-	 &	-  &	-	  & 	-  &	- &		-	  &	- &	- &		-	&	-  &	-	&	-  &	-	  &	- \\
\hline \hline 
Dataset Pair  / &	\multicolumn{3}{|c|}{Dailymed}  &	\multicolumn{3}{|c|}{Dailymed} &	\multicolumn{3}{|c|}{Dailymed}  &	\multicolumn{3}{|c|}{Dailymed} & \multicolumn{3}{|c|}{Drugbank }\\
 Approaches &	\multicolumn{3}{|c|}{DBpedia}  &	\multicolumn{3}{|c|}{LinkedCT} &	\multicolumn{3}{|c|}{TCM}  &	\multicolumn{3}{|c|}{Sider} & \multicolumn{3}{|c|}{Sider }\\
\hline 
	& P &	R & F1	 & P	  & R	 & F1 & P & 	R	& F1 & P  &	R & F1 & P  &	R  & F1\\
\hline 
SERIMI	 &	 \textbf{0.61}	 &	\textbf{ 0.33} & 0.43&		\textbf{0.23} & 0.05 & 0.08&	\textbf{0.23} &	\textbf{0.91} & 0.37&	0.54 &	0.87 & 0.67	 &	\textbf{0.33} &		0.92 & 0.48\\
RiMOM &		0.25 &		0.29 &	0.26 &		0.07 &		0.24 &	0.11 &		0.16	 & 	0.54	&	0.23  & 	\textbf{0.57}	 & 	0.71	&	0.62  & - & - &	 -\\
ObjectCoref &		-  &		-  &	- &		-	 & 	-	&	- & 	-  &		-   &	- &		0.55  &	\textbf{	0.99}	&	0.70  & 	0.30 &	\textbf{0.99} &	0.46 \\
\hline \hline 
Dataset Pair  / &	\multicolumn{3}{|c|}{ Diseasome	}  &	\multicolumn{3}{|c|}{Person11} &	\multicolumn{3}{|c|}{Person21}  &	\multicolumn{3}{|c|}{Restaurant1}  &	\multicolumn{3}{|c|}{ } \\

Approaches  &	\multicolumn{3}{|c|}{ Sider	}  &	\multicolumn{3}{|c|}{	Person12} &	\multicolumn{3}{|c|}{Person22}  &	\multicolumn{3}{|c|}{Restaurant2}  &	\multicolumn{3}{|c|}{ } \\
 \hline   
 	& P &	R & F1	 & P	  & R	 & F1  & P & 	R	& F1 &  P & 	R	& F1 &		   &			 	  & \\
\hline 
SERIMI	 &	0.83 &		\textbf{0.90} &	0.87 & 		\textbf{1.00}	 &	\textbf{1.00}  &	1.00 &		0.56 &		0.39  &	0.46 &		 0.77  &			 0.77 	  &	0.77   &		   &			 	  &	\\
RiMOM &		- &		-  &	- &		1.00	 &	1.00 &1.00 &		0.95 &		\textbf{0.99}	&	0.97 &	0.86	&	 0.77 	 &	0.81	&		   &			 	  &\\
ObjectCoref	 &	0.84	 &	0.67	&	0.74 &	1.00 &		0.99 &	0.99 &	\textbf{1.00} &		0.90 &	0.95 &	 \textbf{0.99 } &		 \textbf{0.80}	 &	0.88 &		   &			 	  &\\
\hline 
\end{tabular} 
\vspace{-10pt}
\end{table} 

As we can see in Table 4, SERIMI outperforms the alternative approaches in 70\% of the cases, improving F1 by 10\% on average in those cases. Sider-Diseasome and Sider-Drugbank are problematic cases for the alternative approaches, where SERIMI achieved gains of 42\% and 47\% in F1, respectively. The improvement in the case of Dailymed-DBpedia is also substantial, with SERIMI gaining 17\% in precision over the alternative approaches. In all these cases, SERIMI appears to be more successful in refining candidates. 
% After blocking, SERIMI could filter out many irrelevant candidates, resulting i
%In this case, considering that the blocking strategy was the main component responsible for the increase in the recall, it shows a significant improvement in the precision caused by the class-based disambiguation. 
For example, there are many candidate instances labeled ``Magnesium'' in DBpedia (e.g., ``Isotopes\_of\_magnesium'', ``Magnesium'', ``Category:Magnesium'', ``Book:Magnesium''). SERIMI used information from the other pseudo-homonym sets to resolve this ambiguity. Because there were many more instances of the class \verb+drugs+ in the other pseudo-homonym sets (e.g., \verb+Morphine+, \verb+Diazepam+, \verb+Diclofenac+) than instances of the classes \verb+book+ and \verb+category+, SERIMI was able to select the correct instance \verb+Magnesium+, which belongs to the class \verb+drugs+. 
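The idea behind this class-based disambiguation can be sketched as a majority vote over the classes of the instances selected for the other, less ambiguous labels. The class names and candidate structure below are illustrative assumptions, not actual DBpedia data or SERIMI's exact scoring function.

```python
# Hedged sketch of class-based disambiguation: prefer the candidate whose
# class matches the class that dominates the selections made for the other
# pseudo-homonym sets. Class names and URIs are illustrative.

from collections import Counter

def dominant_class(resolved):
    """Most frequent class among instances selected for other labels."""
    return Counter(c["class"] for c in resolved).most_common(1)[0][0]

def disambiguate(candidates, resolved):
    """Pick a candidate whose class matches the dominant class, if any."""
    target = dominant_class(resolved)
    matching = [c for c in candidates if c["class"] == target]
    return matching[0] if matching else candidates[0]

# Unambiguous selections from other pseudo-homonym sets:
resolved = [{"uri": "Morphine", "class": "drugs"},
            {"uri": "Diazepam", "class": "drugs"},
            {"uri": "Diclofenac", "class": "drugs"}]

# Ambiguous candidates for the label "Magnesium":
magnesium = [{"uri": "Book:Magnesium", "class": "book"},
             {"uri": "Category:Magnesium", "class": "category"},
             {"uri": "Magnesium", "class": "drugs"}]

best = disambiguate(magnesium, resolved)
# Selects the candidate of class "drugs", i.e. "Magnesium".
```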
%An interesting conclusion about this case is that, in general, SERIMI is very effective to disambiguate pseudo-homonyms sets that contains instances that belong to multiple different classes, as in the previous example.

The poor performance on the Person21-Person22 pair is due to the nature of the data. These datasets were built by adding spelling mistakes to the properties and literal values of their original datasets. The blocking strategy used by SERIMI is not specifically tuned to deal with misspellings and thus was not able to build the pseudo-homonym sets properly, resulting in low recall. In contrast, ObjectCoref and RiMOM use more specific functions for dealing with the similarity of misspelled data. To further improve the performance of SERIMI, more advanced similarity matching functions may be considered to yield better result candidates, which can then be refined by SERIMI's disambiguation.  
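One such similarity function can be sketched with a character-level ratio that tolerates small misspellings; the threshold of 0.85 below is an illustrative choice, not a value used by SERIMI or the compared systems.

```python
# Hedged sketch: a fuzzy label comparison that a blocking step could use to
# tolerate the spelling mistakes injected into Person21-Person22.
# The 0.85 threshold is an illustrative assumption.

from difflib import SequenceMatcher

def fuzzy_match(label_a, label_b, threshold=0.85):
    """True if the two labels are similar despite small misspellings."""
    ratio = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
    return ratio >= threshold

# An exact comparison misses the misspelled label; the fuzzy one does not:
exact = ("John White" == "Jonh White")          # False
fuzzy = fuzzy_match("John White", "Jonh White") # True under this threshold
```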

We further noticed that the CRDS function does not work properly for the Person21-Person22 pair because the instances in these datasets are all of the same class, e.g. two instances that have the label \verb+John White+. We note that SERIMI is designed for the heterogeneous case, where the instances to be matched belong to multiple domains. Matching instances in this single-domain (same-class) setting is indeed ``problematic'' because the idea behind SERIMI is to use class information for disambiguation, i.e. to filter out candidates that do not belong to the class of interest. When all candidates belong to the same class, there is not enough information for SERIMI to distinguish and disambiguate instances. Clearly, a wide range of techniques exists for instance matching in this single-domain setting, and they should be used instead of SERIMI. We consider SERIMI a complementary tool that specifically targets the heterogeneous setting. However, we observe that extending SERIMI to capture not only the instances but also their neighbours as features can provide additional class information (i.e. the classes of the neighbours) that helps disambiguation even in this single-domain setting.  

As a final observation, we saw that some poor performances (e.g. in some cases of Dailymed-DBpedia, Dailymed-LinkedCT, Dailymed-TCM, Drugbank-Sider, and Sider-Dailymed) are due to errors in the reference mappings. In the technical report\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking/blob/master/TECH-REPORT.PDF}, we list a few examples. 
%\indent\textbf{Sider-Dailymed:} \\
%  \indent\textit{Source resource:} \\
%	http://www4.wiwiss.fu-berlin.de/sider/resource/drugs/151165 \\
%\indent\textit{Current reference alignment:} \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/drugs/1050 \\
% \indent\textit{Missing alignment: }\\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/drugs/1618 \\
%\indent\textbf{Dailymed-DBPedia:} \\
%\indent\textit{ Source resource:} \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/Allergan,\_Inc. \\
%	http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/AstraZeneca \\
%    http://www4.wiwiss.fu-berlin.de/dailymed/resource/organization/B.\_Braun \\
%\indent\textit{ Missing alignment: } \\
%	http://dbpedia.org/resource/Allergan \\
%	http://dbpedia.org/resource/AstraZeneca \\
%	http://dbpedia.org/resource/B.\_Braun\_Melsungen 
Although this reference mapping is thus not ideal, it is the best-known ground truth provided by the community. This quality problem affects all systems equally; hence, the comparisons provided here still yield fair conclusions. Yet, this observation suggests that, for a better understanding of instance-matching solutions, more work is needed in the future to establish a higher-quality benchmark. 

In conclusion, we observed a considerable improvement in accuracy when we applied the proposed class-based disambiguation to results obtained from a simple blocking procedure. We expect further improvement when more sophisticated techniques for blocking and computing results are employed. Our approach is especially recommended for cases where there is little overlap between the schemas of the instances being matched. For single-domain matching tasks (i.e. perfect overlap), we recommend existing work, which we will integrate into our matching procedure in future work. SERIMI's implementation is open source and available for download at GitHub\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking}.

