\section{Experimental Evaluation}

In this section, we describe our evaluation, which is based on the instance-matching track of the Ontology Alignment Evaluation Initiative (OAEI) 2010 and 2011. This track focuses on evaluating the effectiveness of instance-matching approaches over Web data, which is exactly the goal of our evaluation. As described before, we focus on generating candidate sets with high recall, which are consequently highly ambiguous. To increase the accuracy of the match over these ambiguous sets, we combined SERIMI with a direct match approach (S+SR+DM) to exploit the overlap between the source and target data.

\subsection{Experiment Setting}
\textbf{Data Collections.} We used the data collections of OAEI 2010 and 2011 in our evaluation. From the OAEI 2010 collections, the life science (LS) collection (which includes DBPedia,
%\footnote{http://dbpedia.org/About}  
Sider,
%\footnote{http://www4.wiwiss.fu-berlin.de/sider/}  
Drugbank,
%\footnote{http://www4.wiwiss.fu-berlin.de/drugbank/}  
LinkedCT,%\footnote{http://data.linkedct.org/}  
 Dailymed, 
%\footnote{http://www4.wiwiss.fu-berlin.de/dailymed/} 
% 
TCM,
%\footnote{http://code.google.com/p/junsbriefcase/wiki/RDFTCMData} 
and Diseasome) 
%\footnote{http://www4.wiwiss.fu-berlin.de/diseasome/} 
and the Person-Restaurant (PR) collection were used. The task in this benchmark was to match entities from each of these datasets to the others. All cases evaluated by the other participants in this challenge are also evaluated here. From the 2011 data, the New York Times (NYT) collection was used. The task in this benchmark was to match NYT entities to DBPedia, Geonames, and Freebase. All these cases are evaluated here. 

\textbf{Evaluation metrics.} We used precision, recall, and F1, as proposed by the OAEI benchmark, to measure the effectiveness of the proposed approach. The provided reference mapping serves as the ground truth: true positives are the mappings found by SERIMI that exist in the ground truth, and false positives are the mappings found by SERIMI that do not. 
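These metrics can be computed directly from the two sets of mappings. The following is a minimal sketch; representing a mapping as a (source, target) URI pair is our simplification, and the example mappings are hypothetical, not taken from the benchmark:

```python
def precision_recall_f1(found, reference):
    """Precision, recall and F1 over sets of (source, target) mappings."""
    found, reference = set(found), set(reference)
    tp = len(found & reference)  # mappings found that are also in the ground truth
    precision = tp / len(found) if found else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical mappings for illustration only:
found = {("drug:1", "db:A"), ("drug:2", "db:B"), ("drug:3", "db:C")}
truth = {("drug:1", "db:A"), ("drug:2", "db:B"), ("drug:4", "db:D")}
print(precision_recall_f1(found, truth))  # each value is 2/3 here
```

With two of three found mappings correct and three reference mappings, precision, recall, and hence F1 all equal 2/3.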

\textbf{Alternative approaches.} Using the OAEI 2010 data collection, we compared SERIMI with RIMOM and ObjectCoref2010 using the results published in OAEI 2010, and with ObjectCoref2012 using the results recently published by ObjectCoref2012 at WWW2012. Using the OAEI 2011 data collection, we compared SERIMI with AgreementMaker and Zhishi.links using the results published in OAEI 2011, and with KnoFuss+GA using the results recently published by KnoFuss+GA at ESWC2012. These alternative approaches represent the state of the art in instance matching over RDF data.

\subsection{Experiment Results}

Table \ref{oaei2010oc} and Table \ref{oaei2010rimon} show the SERIMI results for the OAEI 2010 collection compared to ObjectCoref2010 and RIMOM.

\begin{center}
\begin{table}
\scriptsize
\centering
\caption{F1 Performance for SERIMI and ObjectCoref2010 over the OAEI 2010 collection} 
\label{oaei2010oc}
\begin{tabular}{ | c | c  | c | } 
\hline
Datasets & SERIMI  & ObjectCoref2010 \\
\hline
DAILYMED-SIDER & \textbf{1.0}  & 0.70\\
DISEASOME-SIDER & \textbf{0.97} & 0.74\\
DRUGBANK-SIDER & \textbf{1.0} & 0.46\\
PERSON11-PERSON12 & 0.95 &  \textbf{0.99} \\
PERSON21-PERSON22 & 0.91 &  \textbf{0.95} \\
RESTAURANT1-RESTAURANT2 & \textbf{0.97} &   0.88 \\
\hline 
AVERAGE  & \textbf{0.97}	&	0.79 \\
\hline 
\end{tabular}  
\end{table}  
\end{center} 

As shown in Table \ref{oaei2010oc}, SERIMI (97\% F1) outperforms ObjectCoref2010 (79\% F1) on average. 
ObjectCoref2010 outperformed SERIMI on the Person collections because it used a better direct match approach than ours, one more sensitive to the spelling mistakes that were engineered into this collection. Besides that, class disambiguation played no role in this collection because all entities belong to the single class Person. The better SERIMI performance in the other cases is attributed to the candidate-set generation process, which produced candidate sets with high recall and precision. The effort needed to disambiguate candidates in those cases was minimal.

\begin{center}
\begin{table}[h]
\scriptsize
\centering
\caption{F1 Performance for SERIMI and RIMOM over the OAEI 2010 collection} 
\label{oaei2010rimon}

\begin{tabular}{ | c | c | c |  } 
\hline
Datasets & SERIMI & RIMOM  \\
\hline
DAILYMED-SIDER & \textbf{1.0} & 0.62 \\
SIDER-DAILYMED & \textbf{0.74} & 0.62  \\
SIDER-DISEASOME & \textbf{0.89} &  0.45 \\
SIDER-DRUGBANK & \textbf{0.98} & 0.50 \\
SIDER-TCM & \textbf{0.99} & 0.79  \\
PERSON11-PERSON12 & 0.95 & \textbf{1.0}  \\
PERSON21-PERSON22 & 0.91 & \textbf{0.97}  \\
RESTAURANT1-RESTAURANT2 & \textbf{0.97} &  0.81  \\
\hline 
AVERAGE  & \textbf{0.93}	&	0.72	 \\
\hline 
\end{tabular}  
\end{table}  
\end{center} 

As shown in Table \ref{oaei2010rimon}, SERIMI (93\% F1) outperforms RIMOM (72\% F1) on average. 
RIMOM outperformed SERIMI on the Person collections for the same reasons discussed in the previous paragraph. SERIMI had a considerable performance gain on the life science collection. Class-based disambiguation played an important role in the SIDER-DAILYMED case, where 1-to-many mappings exist and the same name was given to instances of the classes Drug and Ingredient. Class-based disambiguation (S+SR) was able to disambiguate the right cases with 97\% precision. However, when combined with the direct match approach (S+SR+DM), its precision decreased to 72\%, while its recall increased from 74\% to 76\% (see Table \ref{tablef1s}).

SERIMI had the same average performance of 95\% F1 as ObjectCoref2012 (over the Person and Restaurant collections).

\begin{center}
\begin{table*}[]
\scriptsize
\centering
\caption{F1 Performance for all systems over the OAEI 2011 collection} 
\label{oaei2012table}

\begin{tabular}{ | c | c | c | c | c |  } 
\hline
Datasets & SERIMI & KnoFuss+GA &  AgreementMaker  & Zhishi.links \\
\hline
NYT-DBPEDIA-CORP & 0.91 & 0.92 & 0.74 & 0.91 \\
NYT-DBPEDIA-GEO & 0.82 & 0.89 & 0.69 & 0.92 \\
NYT-DBPEDIA-PER & 0.95 & 0.97 & 0.88 & 0.97 \\
NYT-FREEBASE-CORP & 0.9 & 0.92 & 0.80 & 0.87 \\
NYT-FREEBASE-GEO & 0.89 & 0.93 & 0.85 & 0.88 \\
NYT-FREEBASE-PER & 0.95 & 0.95 & 0.96 & 0.93 \\
NYT-GEONAMES & 0.87 & 0.90 & 0.85 & 0.91 \\
\hline 
AVERAGE  & 0.90 &	0.93	&	0.82	&	0.91 \\
\hline 
\end{tabular}  
\end{table*}  
\end{center} 
As we can see in Table \ref{oaei2012table} and the tables above, SERIMI outperformed the alternative approaches in 70\% of the cases, and in those cases it substantially improved F1, by 10\% on average. Sider-Diseasome and Sider-Drugbank were problematic cases for the alternative approaches, where SERIMI achieved gains of 42\% and 47\% in F1, respectively. 
%Also the improvement in the case of DailyMed-DBpedia is substantial, where SERIMI added a gain of 17\% in precision compared to the alternative approaches. 
It seems that SERIMI was more successful in refining candidates. 
% After blocking, SERIMI could filter out many irrelevant candidates, resulting i
%In this case, considering that the blocking strategy was the main component responsible for the increase in the recall, it shows a significant improvement in the precision caused by the class-based disambiguation. 
For example, there were many candidate instances labeled ``Magnesium'' in DBPedia (e.g., ``Isotopes\_of\_magnesium'', ``Magnesium'', ``Category:Magnesium'', ``Book:Magnesium''). SERIMI used information from the other pseudo-homonym sets to resolve this ambiguity. Because there were many more instances of the type \verb+drugs+ in the other pseudo-homonym sets (e.g., \verb+Morphine+, \verb+Diazepam+, \verb+Diclofenac+) than instances of the types \verb+book+ and \verb+category+, SERIMI was able to select the correct instance \verb+Magnesium+, which belongs to the class \verb+drugs+. 
%An interesting conclusion about this case is that, in general, SERIMI is very effective to disambiguate pseudo-homonyms sets that contains instances that belong to multiple different classes, as in the previous example.
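SERIMI's actual disambiguation relies on a similarity function over pseudo-homonym sets; the following toy sketch reduces that idea to simple majority-class voting, with hypothetical single-class assignments per candidate. Both simplifications are ours, not SERIMI's implementation:

```python
from collections import Counter

def resolve_by_dominant_class(candidate_sets, class_of):
    """From each pseudo-homonym set, pick the candidate whose class is the
    most frequent class across all candidate sets (majority-class voting)."""
    # Tally class occurrences over every candidate in every set
    votes = Counter(class_of[c] for cands in candidate_sets for c in cands)
    # In each set, keep the candidate of the globally dominant class
    return [max(cands, key=lambda c: votes[class_of[c]]) for cands in candidate_sets]

# Hypothetical single-class assignments for the Magnesium example:
class_of = {
    "Magnesium": "drugs", "Book:Magnesium": "book",
    "Category:Magnesium": "category", "Isotopes_of_magnesium": "isotope",
    "Morphine": "drugs", "Diazepam": "drugs", "Diclofenac": "drugs",
}
sets_ = [
    ["Magnesium", "Book:Magnesium", "Category:Magnesium", "Isotopes_of_magnesium"],
    ["Morphine"], ["Diazepam"], ["Diclofenac"],
]
print(resolve_by_dominant_class(sets_, class_of))
# → ['Magnesium', 'Morphine', 'Diazepam', 'Diclofenac']
```

Because \verb+drugs+ receives four votes against one each for \verb+book+, \verb+category+, and \verb+isotope+, the ambiguous first set resolves to \verb+Magnesium+, mirroring the intuition described above.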
 


