\section{Evaluation}
\label{sec:evaluation}
Our experiments are based on the OAEI 2010 and 2011 instance-matching tracks. 
%In Section 4.1 we introduce our evaluation metrics, and a measure used to analyse the complexity of the matching task that we evaluate. In section 4.2, we describe the experiment settings and analyse the complexity each matching task. In Section 4.3, 
%%This track focuses on evaluating the effectiveness of instance-matching approaches over Web data, which is the main goal of the evaluations here. 
%we evaluated the performance (accuracy and time) for different configurations of SERIMI. 
%We observed that 
%SERIMI using direct match and class-based matching had the best performance compared each matching strategy individually. 
%SERIMI with the proposed candidate set reduction algorithm was 20\% faster than SERIMI without it. 
We observed that class-based matching (CBM) was useful and complementary to direct matching. 
%: the combination improved F1 results by 0.04 and 0.02 compared to class-based matching and direct matching alone, respectively. 
For OAEI 2010, this combination increased the average F1 of the second-best system by 0.21; for the OAEI 2011 data, SERIMI improves upon the recently proposed approaches \emph{PARIS}~\cite{DBLP:journals/pvldb/SuchanekAS11} and \emph{SIFI-Hill}~\cite{DBLP:journals/pvldb/WangLYF11} by 0.44 and 0.09, respectively. Compared to the best system that participated in OAEI 2011, SERIMI achieved the same performance; unlike that system, however, SERIMI does not assume domain knowledge or manually engineered mappings. 


 

\textbf{Evaluation Metrics.}
We used the standard F1 measure (also employed by OAEI) to assess result accuracy. $F1 = 2 \times \frac{Recall \times Precision}{Recall + Precision}$ is the harmonic mean of precision (proportion of correct matches among the matches found) and recall (proportion of matches found among all actual matches). To compute F1, the provided reference mappings were used as the ground truth. 
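As an illustration, these metrics can be computed as follows (Python sketch; the pair representation and function names are ours, not part of the OAEI tooling):

```python
def precision_recall_f1(found, truth):
    """Precision, recall and F1 of found matches vs. reference matches.

    Both arguments are sets of (source, target) pairs.
    """
    correct = len(found & truth)  # true positives
    precision = correct / len(found) if found else 0.0
    recall = correct / len(truth) if truth else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Toy example: 3 of the 4 found matches are correct; 5 matches exist in total.
found = {("s1", "t1"), ("s2", "t2"), ("s3", "t3"), ("s4", "t9")}
truth = {("s1", "t1"), ("s2", "t2"), ("s3", "t3"), ("s4", "t4"), ("s5", "t5")}
p, r, f1 = precision_recall_f1(found, truth)  # p = 0.75, r = 0.6, f1 ≈ 0.667
```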

 

 


\textbf{Data.} We used all the OAEI 2010 data employed by participants, which include the life science (LS) collection containing DBPedia,
%\footnote{http://dbpedia.org/About}  
Sider,
%\footnote{http://www4.wiwiss.fu-berlin.de/sider/}  
Drugbank,
%\footnote{http://www4.wiwiss.fu-berlin.de/drugbank/}  
%LinkedCT, %\footnote{http://data.linkedct.org/}  
 Dailymed, 
%\footnote{http://www4.wiwiss.fu-berlin.de/dailymed/} 
% 
Tcm
%\footnote{http://code.google.com/p/junsbriefcase/wiki/RDFTcmData} 
and Diseasome
%\footnote{http://www4.wiwiss.fu-berlin.de/diseasome/} 
as well as the Person-Restaurant (PR) dataset. From OAEI 2011, the datasets used were New York Times (Nyt), DBPedia, Geonames and Freebase. %The matching task studied is not to find matches within but across datasets. 
Given a pair of datasets, the task was to match instances in one dataset to instances in the other. The source class of instances for each dataset was defined by the OAEI. Detailed information can be found on the OAEI website\footnote{http://oaei.ontologymatching.org}. Tables~\ref{table:datadescription} and~\ref{table:mappingpairs} show some relevant statistics on the datasets and matching tasks, respectively.
\begin{table}[h]
\scriptsize
\centering
\caption{Number of triples in each dataset.} 
\begin{tabular}{ | c | c | c | c | } 
\hline
Dataset & Triples & Dataset & Triples \\
\hline
Nyt & 350,349 & Person11 & 9,000 \\
Freebase & 3,554,824 & Person12 & 7,270 \\
DBPedia & $>$10,000,000 & Person21 & 10,800 \\
Geonames & $>$10,000,000 & Person22 & 5,944 \\
Sider & 96,204 & Rest1 & 1,130 \\
Tcm & 111,021 & Rest2 & 7,520 \\ 
Dailymed & 131,068 & Drugbank & 507,500 \\
Diseasome & 69,545 & - & - \\ 
\hline 
\end{tabular}  
\label{table:datadescription}
\end{table}  

\textbf{Systems.} All results were computed on an Intel Core 2 Duo, 2.4 GHz, with 4 GB RAM and a FUJITSU MHZ2250BH FFS G1 248 GB hard disk. The SERIMI implementation used in these experiments was written in Ruby and is available for download at GitHub\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking}. Except for SIFI and PARIS, we copied all available results as published in the OAEI benchmarks. For PARIS, we used the authors' available implementation\footnote{http://webdam.inria.fr/paris/}; for SIFI-Hill (SIFI), we used a best-effort implementation in Java. 
 
\subsection{Task Analysis}
The suitability of direct matching and class-based matching for a task is related to the complexity of the matching task itself. So far, no method suits all kinds of matching tasks, because data are imperfect in this heterogeneous setting. As we will show, the widely employed assumption that attributes largely overlap between datasets does not hold for all matching tasks, or for all instances within a matching task. We observed that the accuracy of each matching technique largely depends on the distribution of the predicates and values in the source and target datasets. To better understand how these distributions affect the accuracy of a matching technique, we propose below the use of coverage (Cov) and discriminative power (Disc) as measures for analyzing task complexity.

\begin{equation}
Cov(p,S,G) = \frac{|\{s|\langle s,p,o \rangle \in G \land s\in S \}|}{|S|}
\end{equation}
\begin{equation}
Disc(p,S,G) = \frac{|\{o|\langle s,p,o \rangle \in G \land s\in S \}|}{|\{t|t=\langle s,p,o \rangle \in G \land s\in S \}|}
\end{equation}

where $S$ is the given set of instances in the dataset $G$.

The coverage of a predicate $p$ measures the proportion of instances in $S$ in which $p$ occurs. A predicate $p$ with low coverage occurs in only a few instances; therefore, when utilizing values of $p$ for finding matches, we may miss some candidates. The discriminative power measures the diversity of predicate values. A predicate $p$ has low discriminative power when many instances have the same values for $p$; therefore, using values of $p$ for matching results in larger candidate sets. Consequently, datasets with many predicates that have low coverage and low discriminative power are harder to match. 
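As an illustrative sketch (assuming triples are represented as Python tuples; the function and variable names are ours), the two measures can be computed as:

```python
def cov(p, S, G):
    """Coverage of predicate p: fraction of instances in S having p in G."""
    covered = {s for (s, pred, o) in G if pred == p and s in S}
    return len(covered) / len(S)

def disc(p, S, G):
    """Discriminative power of p: distinct values of p over the number of
    triples with p, restricted to subjects in S."""
    triples = [(s, pred, o) for (s, pred, o) in G if pred == p and s in S]
    values = {o for (_, _, o) in triples}
    return len(values) / len(triples) if triples else 0.0

# Toy dataset: two drugs share the same name, and one instance lacks the
# predicate entirely.
G = {("d1", "name", "aspirin"), ("d2", "name", "aspirin"),
     ("d3", "name", "ibuprofen"), ("d1", "type", "Drug")}
S = {"d1", "d2", "d3", "d4"}
# cov("name", S, G) = 3/4; disc("name", S, G) = 2/3 (2 distinct values, 3 triples)
```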
%Notice that coverage is associate to recall and discriminative power to precision.  

%On the other hand, a dataset with low coverage predicates means that many predicates have to be used to cover all instances; while low discriminative power implies high ambiguity, i.e. there are many instances that share the same value. 

Using these two measures, we introduce a task complexity measure $TC$ that defines the complexity of matching a set of instances $S$ with $T$, where $T=\bigcup_{ c \in C(S)} c$.  First, we introduce the \textit{predicate complexity measure} ($PCM$) that measures the complexity of  matching a set of instances $X$ based on coverage and discriminative power of a set of predicates $P$ in $G$. 
 
\begin{equation}
\footnotesize
  PCM(P,X,G) =   \frac{\sum_{a \in P} Cov(a, X, G) + Disc(a, X, G)}{2 |P|} 
\end{equation} 
 
The size of the candidate sets in $C(S)$ is also an indication of complexity, because sets with more candidates may contain more ambiguous candidates to filter out. Therefore, we define $Card(S)$; smaller values for $Card(S)$ indicate that $C(S)$ has bigger candidate sets.

 \begin{equation}
  Card(S)=\frac{|C(S)|}{\sum_{c \in C(S)} |c|}
 \end{equation} 

Finally, we introduce $TC$, defined as:
 
 \begin{equation}
 \footnotesize
TC = 1 - PCM(P_s, S, G_s) \times PCM(P_t, T, G_t) \times Card(S)
\label{eq:tc}
 \end{equation} 
 
 where $TC$ is a value in the interval $[0,1]$; values closer to 0 indicate less complex tasks and values closer to 1 more complex ones. Table \ref{table:mappingpairs} shows the characteristics of each matching task.  Fig.~\ref{fig:taskvsf1} shows the tasks ordered by TC. With respect to that, Nyt-Geonames is the most complex task, which on average has around six candidate matches per instance. In this table, some tasks are easier because most of the candidate sets contain only correct matches, or one instance per candidate set (e.g. Sider-Tcm). 
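The three measures above can be rendered as the following illustrative Python sketch (helper names are ours; $C(S)$ is represented as a mapping from each source instance to its candidate set):

```python
def _cov(p, X, G):
    # Coverage of predicate p over the instances X in dataset G (Cov).
    return len({s for (s, q, o) in G if q == p and s in X}) / len(X)

def _disc(p, X, G):
    # Discriminative power of p over X in G (Disc).
    triples = [(s, q, o) for (s, q, o) in G if q == p and s in X]
    return len({o for (_, _, o) in triples}) / len(triples) if triples else 0.0

def pcm(P, X, G):
    """Predicate complexity measure: mean of coverage and discriminative
    power over the predicates in P."""
    return sum(_cov(p, X, G) + _disc(p, X, G) for p in P) / (2 * len(P))

def card(C):
    """Card(S): number of candidate sets over their total size; smaller
    values mean bigger candidate sets."""
    return len(C) / sum(len(c) for c in C.values())

def tc(P_s, S, G_s, P_t, T, G_t, C):
    """Task complexity in [0, 1]: 0 is least complex, 1 is most complex."""
    return 1 - pcm(P_s, S, G_s) * pcm(P_t, T, G_t) * card(C)
```

For example, a source whose single predicate has full coverage and fully distinct values ($PCM = 1$), a target whose predicate values all coincide ($PCM = 0.75$), and two candidates per instance ($Card = 0.5$) give $TC = 1 - 1 \times 0.75 \times 0.5 = 0.625$.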
%These tasks have around one candidate per instance, meaning there are no ambiguity such that exactly one candidate can be produced. 

%While candidate selection can largely reduce the number of candidates, it also resulted in some false negatives. Table \ref{table:mappingpairs} shows the recall, precision and F1 for each task w.r.t. their candidate sets. The number of candidates is smaller than the number of matches in most of the cases, indicating that some correct matches were incorrectly rejected (e.g. recall $<$ 1.0) during the candidate selection process.  

\begin{table}[h]
%\tiny
\scriptsize
\centering
\caption{Dataset pairs representing matching tasks, number of comparable predicates (CP) for every task, number of correct matches (Match), number of candidate matches obtained from candidate selection (Cand), mean (MEAN) and standard deviation (STDV) of the number of candidates per instance.} 
\begin{tabular}{ | c | c | c | c | c | c | } 
\hline
Dataset Pairs & CP & Match &  Cand &  MEAN &  STDV  \\
\hline
Nyt-DB-Corp & 3 & 1965 & 3839 & 2.0 & 2.01 \\
Nyt-DB-Geo & 4 & 1920 & 9246 & 4.87 & 7.9 \\
Nyt-DB-Per &  5 &  4977 &  7937 & 1.61 & 1.02 \\
Nyt-Freebase-Corp & 2 &  3044 & 3398 & 1.15 & 0.37 \\
Nyt-Freebase-Geo & 3 &  1920 & 2234 & 1.19 & 0.43  \\
Nyt-Freebase-Per & 3 &  4979 & 5090 & 1.04 & 0.19 \\
Nyt-Geonames & 4 & 1789 & 10782 & 6.18 & 9.21 \\

Dailymed-Sider &  8 & 1592 & 1591 & 1.0 & 0.03 \\
Diseasome-Sider & 4 &  238 & 163 & 1.0 & 0.08 \\
Drugbank-Sider & 8 & 284 & 283 & 1.0 & 0.06 \\

Sider-Dailymed & 2&  1634 &  1915 & 2.93 & 2.43 \\
Sider-DB-Drugs & 2 & 734 & 742 & 1.05 & 0.22 \\
Sider-DB-SideEffect & 2 & 775 & 960 & 1.25 & 0.56 \\
Sider-Diseasome &  4 & 173 & 192 & 1.2 & 0.57  \\
Sider-Drugbank & 8 & 1140 & 881 & 1.04 & 0.21 \\
Sider-Tcm & 2 & 171 & 168 & 1.0 & 0.08 \\

Person11-Person12 & 6 &   500 & 1501 & 3.23 & 2.28 \\
Person21-Person22 &  6 & 400 & 476 & 5.06 & 3.2\\
Rest1-Rest2 & 2 & 112 & 117 & 1.06 & 0.5 \\ 
\hline 
\end{tabular}  
\label{table:mappingpairs}
\end{table}  


\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{statistics.pdf}
\caption{Coverage and discriminative power of predicates in the target datasets.} 
\label{fig:coverage}
\end{figure*} 

%We also propose to consider some predicate characteristics to obtain another view of task complexity. Intuitively, a task over two dataset is feasible if it contains comparable predicates based on which direct matching can be performed. Further, direct matching is facilitated when there are a target predicate with high coverage and discriminative power. Target predicates with maximum coverage and maximum discriminative power produce less ambiguous candidate sets, because they have distinct values.  On the other hand, a dataset with low coverage predicates means that many predicates have to be used to cover all instances; while low discriminative power implies high ambiguity, i.e. there are many instances that share the same value. Notice that coverage is associate to recall and discriminative power to precision. These two measures will be used to access the complexity of a matching task.

Fig.\ \ref{fig:coverage} shows the coverage and discriminative power of predicates in the target datasets. In all these datasets, there exists at least one predicate with 100\% coverage (e.g.\ \verb+drugbank:brandName+, \verb+freebase:name+). However, only in some cases is their discriminative power maximal (e.g.\ \verb+drugbank:brandName+). The DBPedia, Geonames and Freebase datasets seem to be the hardest to match, as both the coverage and the discriminative power of their predicates are the lowest. In these cases, many predicates have to be used, which is only possible when there are many corresponding predicates in the source. Conversely, the higher the coverage, the easier the task, because more instances can be covered with fewer predicates (the discriminative power of source predicates is, however, irrelevant, because only target predicate values are used for finding matches). Fig.\ \ref{fig:sourcecoverage} shows predicates in the source datasets that are comparable to target predicates, and their coverage. It indicates that there are always some comparable predicates that can be used (Table~\ref{table:mappingpairs} explicitly shows the number of comparable predicates for every task), and that their coverage is always maximal (except for Nyt). In summary, comparable predicates exist for all the given tasks. However, direct matching is harder for some tasks, such as Nyt-Geonames and Nyt-DB-Geo, as they require using several predicates due to the low coverage and discriminative power of target predicates. As the coverage differs across target instances in those tasks, direct matching may not achieve its full performance due to the lack of comparable predicates at the instance level.

\vspace{-2 mm} 
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{sourcecoverage.pdf}
\caption{Coverage of predicates in the sources.} 
\label{fig:sourcecoverage}
\end{figure} 
\vspace{-2 mm} 
%\begin{figure}[h]
%\centering
%\includegraphics[width=0.4\textwidth]{taskcomplexity.pdf}
%\caption{Task complexity (TC) per matching task.} 
%\label{fig:taskcomplexity}
%\end{figure} 


%The results of the evaluations, that we describe next, show that the systems performed worse exactly on the tasks that are more complex. It means, in those tasks where there are missing predicates in the source or target dataset (low coverage), the target predicates have low discriminative power (e.g.\ tasks including Freebase, Geonames and DBPedia), and the number of candidates per instance is high.


%to enable a direct and fair 
%for comparison. 
%As discussed, these two preliminary works are the main solutions for effective instance matching in this heterogeneous setting.  
%In addition, we investigated how SERIMI performs using different settings for the parameters $\delta$ and $k$. 
%This way, we assess how these parameters affect the overall performance of SERIMI. 
\subsection{SERIMI Configurations}
We evaluated five groups of SERIMI configurations: (1) SERIMI's performance without and with candidate set reduction (algorithm in Sec.\ \ref{sec:setreduction}), referred to as \emph{S} and \emph{S+SR}, respectively. (2) Removal of different features proposed for CBM, namely predicates (S+SR-P), datatype properties (S+SR-D), object properties (S+SR-O) and tuples (S+SR-T). (3) SERIMI's performance with the top-1 approach (S+SR+TOP1) and the threshold approach (S+SR+TH). (4) Direct matching alone (DM), compared with SERIMI's class-based matching combined with direct matching (S+SR+DM). (5) Finally, S+SR+DM+J, which uses Jaccard instead of FSSim (Eq.\ \ref{eq:setsimsr}). Except for S+SR+TOP1 and S+SR+TH, top-1 was used instead of the threshold for matching tasks with one-to-one matching.
We measured time efficiency and result accuracy for every configuration, using all mentioned collections in OAEI 2010 and 2011. The results are shown in Table~\ref{table:timetable} and Table~\ref{table:tablef1s}, respectively. 


%\begin{figure}[h]
%\centering
%\includegraphics[width=0.40\textwidth]{time.pdf}
%\caption{Average time performance for different SERIMI configurations.} 
%\label{fig:time-exp1}
%\end{figure} 


\begin{table*}[]
\centering
\caption{Time performance for different SERIMI configurations, in seconds.}  
\scriptsize
\begin{tabular}{ | c | c | c | c | c | c || c | c || c | c | c | c | } 
\hline
Datasets & S & S+SR & S+SR+DM & S+SR+DM+J & DM & S+SR+TH & S+SR+TOP1 & S+SR-P & S+SR-D & S+SR-O & S+SR-T \\
\hline
Dailymed-Sider & 22.6 & \textbf{14.57} & 20.41 & 29.87 & 17.34 & \textbf{14.15} & \textbf{14.15} & 15.33 & 14.44 & 14.03 & \textbf{13.93} \\
Diseasome-Sider & 1.75 & \textbf{1.38} & 1.45 & 1.86 & 1.71 & 1.46 & \textbf{1.37} & \textbf{1.36} & 1.41 & 1.44 & \textbf{1.36} \\
Drugbank-Sider & 8.85 & \textbf{8.12} & 8.84 & 10.79 & 8.33 & 7.79 & 7.64 & 7.67 & 8.62 & 7.84 & 7.56 \\
Nytimes-DB-Corp & 64.08 & 57.52 & 62.37 & 73.68 & \textbf{18.03} & \textbf{56.34} & 58.62 & \textbf{48.06} & 55.39 & 52.76 & 49.74 \\
Nytimes-DB-Geo & 606.43 & 440.71 & 470.12 & 435.19 & \textbf{78.63} & 441.12 & \textbf{437.56} & 365.62 & 421.33 & 430.98 & \textbf{413.31} \\
Nytimes-DB-Per & 159.13 & 167.14 & 196.53 & 190.75 & \textbf{96.07} & 172.58 & \textbf{167.27} & \textbf{145.19} & 163.92 & 163.29 & 162.24 \\
Nytimes-Freebase-Corp & 47.43 & 41.39 & 47.11 & 44.86 & \textbf{27.15} & \textbf{40.34} & 47.19 & 37.92 & \textbf{37.79} & 38.97 & 38.84 \\
Nytimes-Freebase-Geo & 33.41 & 33.34 & 39.44 & 38.81 & \textbf{21.99} & \textbf{32.25} & 35.93 & \textbf{28.38} & 31.91 & 34.59 & 34.05 \\
Nytimes-Freebase-Per & 78.76 & 74.68 & 91.79 & 87.95 & \textbf{57.56} & 75.29 & \textbf{73.65} & \textbf{70.04} & 77.12 & 71.94 & 72.7 \\
Nytimes-Geonames & 73.75 & 47.85 & 54.13 & 45.02 & \textbf{15.72} & 51.51 & \textbf{47.95} & 35.67 & 46.28 & 43.29 & \textbf{35.45} \\
Person11-Person12 & 3.29 & 3.11 & 3.7 & 3.21 & \textbf{1.34} & \textbf{3.04} & 3.26 & 2.58 & 2.86 & 2.8 & \textbf{2.45} \\
Person21-Person22 & 2.73 & 2.86 & 3.08 & 2.38 & \textbf{0.47} & 2.86 & \textbf{2.84} & 2.28 & 2.46 & 2.51 & \textbf{1.79} \\
Rest1-Rest2 & 0.32 & 0.36 & 0.38 & 0.31 & \textbf{0.14} & \textbf{0.33} & 0.34 & \textbf{0.27} & 0.33 & 0.29 & \textbf{0.27} \\
Sider-Dailymed & 20.53 & 11.92 & 13.02 & 12.72 & \textbf{9.58} & 12.87 & \textbf{11.64} & 11.3 & 12.24 & 12.72 & \textbf{9.99} \\
Sider-DB-Drugs & 9.52 & 8.3 & 9.63 & 8.81 & \textbf{7.89} & 8.35 & \textbf{8.05} & \textbf{7.55} & 8.04 & 7.64 & 8.51 \\
Sider-DB-SideEffect & 4.37 & 4.1 & 3.63 & 3.38 & \textbf{2.3} & 3.5 & \textbf{3.35} & \textbf{2.42} & 2.72 & 2.68 & 2.67 \\
Sider-Diseasome & 1.05 & 0.56 & 0.71 & 0.67 & \textbf{0.48} & \textbf{0.54} & 0.55 & \textbf{0.48} & 0.54 & 0.56 & 0.52 \\
Sider-Drugbank & 22.77 & 18.05 & 16.76 & 17.53 & \textbf{10.84} & 14.42 & \textbf{14.09} & \textbf{12.94} & 13.41 & 14.24 & 12.99 \\
Sider-Tcm & 0.41 & 0.14 & 0.17 & 0.19 & \textbf{0.15} & \textbf{0.14} & \textbf{0.14} & 0.15 & \textbf{0.13} & \textbf{0.13} & \textbf{0.13} \\
\hline
AVERAGE & 61.11 & 49.27 & 54.91 & 53.05 & \textbf{19.77} & 49.41 & \textbf{49.24} & \textbf{41.85} & 47.42 & 47.51 & 45.71 \\
\hline 
\end{tabular}  
\label{table:timetable}
\end{table*}  

\textbf{Candidate Set Reduction.} We observed that with candidate set reduction, SERIMI is 20\% faster (average performance of 61s for S vs. 49s for S+SR). The number of candidate sets used in class-based matching could be considerably reduced; consequently, S+SR performed a much smaller number of comparisons. Candidate set reduction did not compromise accuracy, as the average results for S and S+SR were almost the same (F1 of 0.90 vs. 0.89). 

%\begin{figure}[h]
%\centering
%\includegraphics[width=0.40\textwidth]{f1.pdf}
%\caption{Average F1 performance for different SERIMI configurations.} 
%\label{fig:f1-exp1}
%\end{figure} 

\textbf{Feature Removal.} The efficiency improvement resulting from using fewer features (S+SR vs. S+SR-P, -D, -O and -T) is consistent but small in most cases. Removing predicates (S+SR-P) has the largest impact: performance increased by 20\%. This type of feature represented a large part of all features used; hence, processing was much faster without it. Removing features, however, also had a small but consistently negative impact on accuracy. S+SR-P had the greatest impact on efficiency as well as accuracy; without predicates, F1 is 0.88 (a 0.02 loss in F1). 
%Thus, while removing this type of features yielded an efficiency gain similar to using S+SR, it incurred greater loss in accuracy (it exhibited lower gain to loss ratio).
In general, the results suggest that all proposed features are useful as they contributed to higher accuracy. 

\textbf{Top-1 vs. Threshold.} There were no significant differences in time between the top-1 and the threshold approach (S+SR+TOP1 and S+SR+TH performed similarly). This suggests that selecting the threshold using the method in Sec.\ \ref{sec:threshold} requires little effort and can be done very efficiently. In terms of accuracy, S+SR+TOP1 had better average performance (86\% F1) than S+SR+TH (84\% F1). More specifically, S+SR+TOP1 yielded better results for tasks with one-to-one mappings.
% between source and target instances, i.e.\ when there was only one correct match for every source instance.
However, S+SR+TOP1 performed worse than S+SR+TH in two cases in which one-to-many mappings were needed (50\% F1 for Person21-Person22 and 56\% F1 for Sider-Dailymed, compared to 86\% and 81\%, respectively). 

\textbf{Direct Matching vs. Class-based Matching.} The DM approach (20s) was the fastest, followed by S+SR (50s), S+SR+DM (55s) and S (61s). Class-based matching as performed by S was expensive, requiring a much larger number of comparisons than direct matching (DM). Using candidate set reduction (S+SR), performance could be improved; S+SR is only 2.5 times slower than DM. Their combination (S+SR+DM) is slightly slower than S+SR. In return, S+SR+DM achieved the best F1 performance (93\%). That is, SERIMI achieved the highest accuracy when direct and class-based matching were combined. S+SR+DM improved upon S+SR because DM could reinforce the similarity between instances when there was a direct overlap between the source and target.
% In fact, overlaps existed in all matching tasks as there were always some comparable predicates between source and target. Thus, there were no cases in which DM performed poorly. However, 
In some cases, such as Nyt-DB-Geo, S+SR achieved much higher F1 than DM (81\% vs. 69\%). Their combination, S+SR+DM, could leverage the evidence used by both approaches to further improve the results (82\%). While this simple combination led to better results on average, there was one exception where DM yielded better performance (Person11-Person12), and several cases in which S+SR produced better results (Sider-Dailymed, Sider-DB-SideEffect, Sider-Diseasome). 

 
In particular, S and S+SR performed poorly on Person11-Person12 (49\% and 47\%, respectively) because the features of the candidate instances are very similar (e.g. they all contain phone and address attributes and are of type Person). Due to this, CBM produced similar scores for all candidates, which were not sufficiently distinct to separate the correct matches from the incorrect ones. For this task, DM performed better because the overlap between the source and target instances is sufficiently high to identify the correct matches.


\textbf{Jaccard Similarity vs. Set-based Similarity.} Using Jaccard as the set similarity in S+SR+DM+J decreased the average F1 from 93\% to 87\%. This confirms our intuition that commonalities are more relevant than differences for defining similarity in our problem setting. Regarding time, S+SR+DM+J (53s) was slightly faster than S+SR+DM (54s) on average.
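To make this contrast concrete, the sketch below compares Jaccard against the overlap (Szymkiewicz-Simpson) coefficient, which, like FSSim, rewards commonalities rather than penalizing differences. Note that the overlap coefficient is only an illustrative stand-in, not the paper's FSSim (Eq.\ \ref{eq:setsimsr}); the feature sets are made up:

```python
def jaccard(a, b):
    """Jaccard: intersection over union; any difference lowers the score."""
    return len(a & b) / len(a | b) if a | b else 0.0

def overlap_coefficient(a, b):
    """Overlap (Szymkiewicz-Simpson) coefficient: intersection over the
    smaller set; rewards commonalities only. A stand-in for illustration,
    NOT the paper's FSSim."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

# One instance description is richer than the other, but every feature of the
# smaller set is shared:
a = {"phone", "address", "type:Person", "name", "city"}
b = {"phone", "address", "type:Person"}
# jaccard(a, b) == 0.6, while overlap_coefficient(a, b) == 1.0
```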

\textbf{Task Complexity.} Fig.\ \ref{fig:taskvstime} shows the connection between the time performance of S+SR, S+SR+DM and DM and the number of triples in the candidate sets, which captures the amount of data that has to be processed. Clearly, more time was needed when more candidates and data had to be processed. The time performance of all three configurations increased fairly linearly with the amount of data. To assess complexity from the viewpoint of accuracy, we used the TC measure discussed before. Fig.\ \ref{fig:taskvsf1} shows the connection between the F1 performance of S+SR, S+SR+DM and DM and TC. We observed a trend between complexity and F1: F1 decreased as complexity increased. Interestingly, in many cases, including Person21-Person22 and Nyt-DB-Geo, S+SR and DM are complementary, i.e. S+SR had a higher performance when DM had a lower performance, and vice versa. S+SR+DM was most helpful in these cases, as it could leverage the complementary nature of the two approaches to improve the results.

 
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{taskvstime.pdf}
\caption{Time performance; tasks are ordered according to the number of triples in the candidate sets.} 
\label{fig:taskvstime}
\end{figure}


In conclusion, the highest accuracy is achieved by combining class-based matching with direct matching. Time efficiency was reported here only to show that CBM is feasible. In the following experiments, we will use S+SR+DM, combined with the top-1 approach where there is a one-to-one mapping and the threshold approach otherwise. 




\begin{table*}[]
\centering
\caption{F1 performance for different SERIMI configurations.} 
%\scriptsize\tt
\scriptsize
\begin{tabular}{ | c | c | c | c | c | c || c | c || c | c | c | c | } 
\hline
Datasets & S & S+SR & S+SR+DM & S+SR+DM+J & DM & S+SR+TH & S+SR+TOP1 & S+SR-P & S+SR-D & S+SR-O & S+SR-T \\
\hline
Dailymed-Sider & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.99 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} \\
Diseasome-Sider & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} \\
Drugbank-Sider & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.98 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} \\
Nytimes-DB-Corp & 0.88 & 0.88 & \textbf{0.91} & 0.85 & 0.83 & 0.78 & \textbf{0.88} & 0.87 & \textbf{0.88} & \textbf{0.88} & \textbf{0.88} \\
Nytimes-DB-Geo & 0.81 & 0.81 & \textbf{0.82} & 0.63 & 0.69 & 0.36 & \textbf{0.81} & 0.79 & \textbf{0.81} & \textbf{0.81} & \textbf{0.81} \\
Nytimes-DB-Per & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & 0.94 & 0.93 & 0.91 & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} \\
Nytimes-Freebase-Corp & \textbf{0.92} & \textbf{0.92} & \textbf{0.92} & 0.84 & \textbf{0.92} & 0.88 & \textbf{0.92} & 0.86 & \textbf{0.92} & \textbf{0.92} & \textbf{0.92} \\
Nytimes-Freebase-Geo & 0.92 & 0.92 & \textbf{0.93} & 0.83 & 0.92 & 0.87 & \textbf{0.92} & 0.88 & 0.92 & \textbf{0.93} & \textbf{0.93} \\
Nytimes-Freebase-Per & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & 0.93 & \textbf{0.95} & 0.94 & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} \\
Nytimes-Geonames & 0.78 & 0.78 & \textbf{0.87} & 0.49 & \textbf{0.87} & 0.4 & \textbf{0.78} & 0.64 & \textbf{0.78} & \textbf{0.78} & \textbf{0.78} \\
Person11-Person12 & 0.47 & 0.47 & 0.95 & 0.95 & \textbf{0.97} & \textbf{0.49} & 0.47 & 0.46 & \textbf{0.48} & 0.46 & 0.46 \\
Person21-Person22 & 0.86 & 0.86 & \textbf{0.91} & \textbf{0.91} & \textbf{0.91} & \textbf{0.86} & 0.5 & \textbf{0.86} & \textbf{0.86} & \textbf{0.86} & \textbf{0.86} \\
Rest1-Rest2 & 0.96 & 0.96 & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & 0.94 & \textbf{0.96} & \textbf{0.96} & \textbf{0.96} & \textbf{0.96} & \textbf{0.96} \\
Sider-Dailymed & \textbf{0.83} & 0.81 & 0.74 & 0.55 & 0.72 & \textbf{0.81} & 0.56 & 0.73 & \textbf{0.8} & 0.79 & 0.79 \\
Sider-DB-Drugs & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} \\
Sider-DB-SideEffect & \textbf{0.9} & \textbf{0.9} & 0.89 & 0.89 & 0.89 & 0.89 & \textbf{0.9} & \textbf{0.9} & \textbf{0.9} & \textbf{0.9} & \textbf{0.9} \\
Sider-Diseasome & \textbf{0.91} & \textbf{0.91} & 0.89 & 0.9 & 0.88 & \textbf{0.91} & 0.91 & \textbf{0.92} & 0.91 & 0.91 & 0.91 \\
Sider-Drugbank & 0.97 & 0.97 & 0.98 & \textbf{0.99} & 0.98 & 0.96 & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} \\
Sider-Tcm & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} \\
\hline
AVERAGE & 0.90 & 0.89 & \textbf{0.93} & 0.87 & 0.91 & 0.84 & \textbf{0.86} & 0.88 & \textbf{0.89} & \textbf{0.89} & \textbf{0.89} \\
\hline 
\end{tabular}  
\label{table:tablef1s}
\end{table*}  

\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{taskvsf1.pdf}
\caption{F1 for tasks with increasing complexity.} 
\label{fig:taskvsf1}
\end{figure} 
\vspace{-4 mm} 