\section{Class-based Matching} 
%In this section we introduce our solution for the class-based matching problem. A set of correct matches $M$ can only be determined by class-based matching when the CTP in $M$ can be determined. Although in the real world data the CTP over $M$ may not exist, because real data is heterogeneous ($M$ may have instances from different classes of interest) and incomplete (instances in $M$ may have missing predicates), the class-based method can still be applied, by approximating the notion of CTP.
%, as the similarity between instances in the direct match is approximated by using approximate string matching over the IFP (PIFP). 

We will first present the main idea and then discuss extensions to this basic solution.
%This matching is performed between a target instance and a representation of the class of interest $M$. 

\subsection{Basic Solution}
\textbf{Features.} We use an instance-based representation of the class of interest. That is, both the target instance and the class of interest can be modeled via the instance representation $IR(G,X)$. In particular, we extract the following information from $IR(G,X)$ to use as features. 

\begin{definition}[Features] Let $G$ be a dataset and $X$ be a set of instances in $G$. We employ the following sets of features to model $X$: 

\begin{itemize}
%\setlength{\itemsep}{-4pt}
	\item $A(X) = \{p | (s, p, o) \in IR(G, X) \land s \in X\}$,
  \item $D(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in L \}$,
	\item $O(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in U \}$, 
	\item $T(X) = \{(p,o) | (s, p, o) \in IR(G, X) \land s \in X \}$.
%	\vspace{-4pt}
\end{itemize}
The combined set of features of $X$ is 
 $$F(X) = A(X) \cup D(X)\cup O(X)\cup T(X)$$
\end{definition}  

Intuitively, $A(X)$ is the set of predicates, $D(X)$ the set of literals, $O(X)$ the set of URIs, and $T(X)$ the set of predicate-object pairs that appear in the representation of $X$. Typically, only predicate-object pairs are used for direct matching. For our problem of class-based matching, we use these features to represent not only instances but also classes. Hence, not only instance-specific but also class-related features, such as $A(X)$, are useful. 
%We use all these features because real data is incomplete and to approximate the notion of CTP  we need to consider any features that candidates may have in common. 
For example, the predicate \verb+geo:long+ is a good descriptor for the class location, while \verb+corp:revenue+ is a good feature for representing the class company.  
%The co-occurrence of \verb+geo:long+ in most of the candidates could help to identify that the class of interest is a location and not a company. Empirically we observed that all these features are essential to approximate CTP.

Considering $X=$\{\verb+db:Belmont_California+\}, its features would be: $A(X)=$\{\verb+rdfs:label+, \verb+db:country+\}, $D(X)=$\{'Belmont'\}, $O(X)=$\{\verb+db:Usa+\}, and $T(X)=$\{(\verb+rdfs:+ \verb+label+, 'Belmont'), (\verb+db:country+, \verb+db:Usa+)\}. Consequently, $F(X)=$\{\verb+rdfs:label+, \verb+db:country+, 'Belmont', \verb+db:Usa+, (\verb+rdfs:+ \verb+label+, 'Belmont'), (\verb+db:country+, \verb+db:Usa+)\}. Note that for a candidate set $C(s)$ with more than one element, $F(C(s))$ is simply the union of the features of the constituent instances. For example, $F(C(\verb+nyt:5555+))=$\{\verb+rdfs:label+, \verb+db:country+, \verb+db:+ \verb+locatedIn+, 'San Jose', \verb+db:Usa+, \verb+db:California+, \verb+db:Costa_+\verb+Rica+,  (\verb+rdfs:+\verb+label+, 'San Jose'), (\verb+db:country+, \verb+db:Usa+), (\verb+db:+ \verb+locatedIn+, \verb+db:California+),  (\verb+db:country+, \verb+db:Costa_Rica+)\}.
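As a concrete sketch, the feature extraction above can be written as plain set comprehensions. The triple list, the \verb+is_uri+ test, and the toy data below are illustrative stand-ins for $IR(G,X)$, not part of the actual system:

```python
def features(triples, X):
    """Return F(X) = A(X) | D(X) | O(X) | T(X) for the triples of IR(G, X)."""
    def is_uri(o):
        # Toy URI test for this sketch only; a real system would check the term type.
        return isinstance(o, str) and (o.startswith("db:") or o.startswith("nyt:"))

    A = {p for (s, p, o) in triples if s in X}                    # predicates
    D = {o for (s, p, o) in triples if s in X and not is_uri(o)}  # literals
    O = {o for (s, p, o) in triples if s in X and is_uri(o)}      # URIs
    T = {(p, o) for (s, p, o) in triples if s in X}               # predicate-object pairs
    return A | D | O | T

# Running example: db:Belmont_California and its two triples.
triples = [("db:Belmont_California", "rdfs:label", "Belmont"),
           ("db:Belmont_California", "db:country", "db:Usa")]
F = features(triples, {"db:Belmont_California"})
# F contains rdfs:label, db:country, 'Belmont', db:Usa and the two (p, o) pairs
```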

 
%For implementing $\sim_S$, we need to capture the similarity between sets of instances $X_1$ and $X_2$, which is realized by the function we call $SetSim$ as follows:
\textbf{Similarity Function.} Targeting the class-based matching problem, we introduce $SetSim(X_1,X_2)$ to capture the similarity between two sets of instances $X_1$ and $X_2$ based on their set of features $F(X_1)$ and $F(X_2)$:

\begin{equation}
SetSim(X_1,X_2)=FSSim(F(X_1),F(X_2))
\end{equation} 

% (no matter the number of features they are associated with). Our hypothesis is that the commonalities is one order of magnitude more relevant than the differences on defining similarity in our problem setting. In Sec \ref{sec:evaluation}, we verify empirically that $FSSim$ beats the common Jaccard set similarity by a consistent margin.
%Based on Tversky's contrast model \cite{tversky_features_1977}, 

where $FSSim(F(X_1),F(X_2))$ is the function capturing the similarity between the two sets of features $F(X_1)$ and $F(X_2)$. 

Early work by Tversky \cite{tversky1977features} shows that the similarity of a pair of items depends on both their commonalities and their differences. This intuition is applied in similarity functions used for instance matching, which, like Jaccard similarity, give the \emph{same weight} to commonalities and differences. This is suitable for matching instances because commonalities help to infer that two instances might be the same, while differences support the conclusion that they are not. For class-based matching, we depart from this and place a \emph{greater emphasis on commonalities}. We do so because the number of features that a class of instances has in common is typically small compared to the number of features that are specific to individual instances. For deciding whether an instance belongs to a class, we deem the common features to be more characteristic of the class. They are also more distinctive due to their scarcity. Features that are specific to individual instances are less representative of the class and convey more noise, due to their abundance. We propose the following function to capture this intuition:

\begin{equation}
\footnotesize
FSSim(f_1,f_2) = \left\{ 
  \begin{array}{ll}
     0 &  \text{if } |f_1\cap f_2|=0 \\
     |f_1\cap f_2| - \frac {|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|} &  \text{otherwise}
  \end{array} \right. 
\label{eq:setsimsr}
\end{equation}
where $f_1$ and $f_2$ stand for $F(X_1)$ and $F(X_2)$, respectively. $FSSim(f_1,f_2)$ only considers $f_1$ and $f_2$ to be similar when there exist some commonalities (i.e.\ $FSSim(f_1,f_2)=0$ if $|f_1\cap f_2|=0$). The first term, $|f_1\cap f_2|$, has a much larger influence: it captures commonalities as the number of features shared by $f_1$ and $f_2$, which is at least 1 whenever the function is non-zero. The second term, $\frac {|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|}$, captures the differences and is always smaller than 1. As a consequence, given $t_j$ and $t_k$ that have $n$ and $n-1$ features in common with $t_i$, respectively, $FSSim$ always returns a higher score for $t_j$. 
% i.e. $FSSim(f(t_i), f(M^*(S))) > FSSim(f(t_j), f(M^*(S)))$. 

For example, assuming $f_1=F($\{\verb+db:Belmont_California+\}$)$, $f_2=F($\{\verb+db:Belmont_France+\}$)$ and $f_3=F(C$(\verb+nyt:5555+)); then, $FSSim(f_1, f_3)  = 3.65$, while $FSSim(f_2, f_3) = 1.5$. The scores reflect the fact that $f_1$ has 4 features in common with $f_3$, while $f_2$ has only 2.
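A minimal sketch of Eq.\ \ref{eq:setsimsr} over plain Python sets; the feature sets below are arbitrary toy values, not the ones from the running example:

```python
def fssim(f1, f2):
    """FSSim: commonalities dominate; differences only act as a tie-breaker."""
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    # The subtracted term always lies in [0, 1), so the integer overlap count
    # decides the ordering whenever the overlap sizes differ (Theorem 1).
    return common - (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))

# Same overlap (2 features), fewer differences -> slightly higher score.
print(fssim({1, 2, 3}, {1, 2}))        # 2 - 1/6
print(fssim({1, 2, 3}, {1, 2, 4, 5}))  # 2 - 3/10
```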

This bias towards commonalities is captured by the following theorem, which does not hold for the Jaccard function (see Appendix A):  

\begin{theorem}
If $|f_i\cap f_j| > |f_i \cap f_k|$ then $FSSim(f_i,f_j) > FSSim(f_i,f_k)$.
\label{theorem:t1}
\end{theorem}
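For intuition, the theorem admits the following short argument (a sketch only; Appendix A gives the full treatment, including why Jaccard violates the property):

```latex
% Sketch: the overlap term is an integer, the difference term lies in [0,1).
From $|f_i\cap f_j| > |f_i \cap f_k|$ it follows that
$|f_i\cap f_j| \geq |f_i \cap f_k| + 1$. Since the difference term of
Eq.~\ref{eq:setsimsr} always lies in $[0,1)$, we obtain
$$FSSim(f_i,f_j) > |f_i\cap f_j| - 1 \geq |f_i \cap f_k| \geq FSSim(f_i,f_k).$$
```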

Note that the proposed function does not completely neglect the role of differences. In particular, when two instances have the same number of features in common with a class, their differences to that class decide which one is the better match. 
%the differences are only important in our case when two distinct instances have the same number of common features with the class of interest. In this case, if any of them has a large number of distinct features it should be consider less similar the other with less distinct features, because the difference in this particular case is an indication of dissimilarity. Therefore, to capture this notion of similarity, we propose FSSim, a similarity function that increases the similarity as the commonalities increases.  We define $FSSim$ as:

%To our problem setting, we observed that $FSSim$ had a better performance than the popular Jaccard similarity. It happens because Jaccard gives the same weight to the commonalities and differences, in contrary to $FSSim$. In appendix A we prove that  Jaccard violates Theorem \ref{theorem:t1} in contrary to $FSSim$.


\textbf{Class-based Matching.} 
%We propose an approximate solution the the class-based matching problem. To find the best $M(S)$, our solution compare every $t \in C(s) \in C(S)$ to every set $C(S)^- = C(S) \setminus C(s)$. Then we select $t$ from $C(s)$ such that $Sim(t,C(s),C(S)^-) > \delta$. The intuition exploited in this solution is that a correct match for $C(s)$ should be the most similar instance to the others candidates sets. Both $t$ and $C(s)$ are represented as a set of features and the similarity score $Sim(t,C(s),C(S)^-)$ is compute using a set-based similarity function.

Given a set of instances $S$ and the candidate sets $C(S)=\{C(s_1),\dots, C(s_n)\}$, we formulate class-based matching as the problem of finding, in each candidate set, the instances $t$ (i.e.\ $t \in C(s) \in C(S)$) that are similar to the candidate sets in $C(S)$. 

Our method starts by computing a similarity score between each $t \in C(s)$ and $C(S)$ itself, i.e.,\ $Sim(\{t\}, C(S))$. In this process, $C(S)$ is treated as the class of interest but not as the solution set $M(S)$; this differs from the formal problem definition, where $M(S)$ is both the class of interest and a solution set. In this approximation, we start from $C(S)$ to obtain the solution set $M(S)$, thereby avoiding the enumeration of all possible $M(S) \in \mathbf{M}$, as discussed before. 

The computation of $Sim(\{t\}, C(S))$ yields a score for each individual instance $t \in C(s) \in C(S)$. The final solution set $M(S)$ is then composed of the sets $M(s) \subseteq C(s)$ such that $Sim(\{t\}, C(S)) > \delta$ for all $t \in M(s)$. Below, we define $Sim$; later, we describe how the threshold $\delta$ is computed.

%In this proposed approximation to the problem, $C(S)$ is considered only as an initial class of interest; differently from the formal problem definition where $M(S)$ is both the class of interest and a solution set. We depart from $C(S)$ to obtain the solution set $M(S)$, therefore avoiding to enumerate all possible $M(S) \in \mathbf{M}$, as discussed before.
%Starting from this point, we compute a score from each candidate instance $t \in C(s) \in C(S)$ using $Sim(t,C(S)^-)$,i.e.,\ $SetSim(\{t\}, C(s')), \forall C(s') \in C(S)^-$.  Further those instances with a score higher than a threshold $\delta$ are select as a final  solution set $M(S)$.

 
\begin{equation}
Sim(t,C(S))=\sum_{C(s') \in C(S)^-}\frac{SetSim(\{t\},C(s'))}{|C(s')|}
\label{eq:urds}
\end{equation}

where $t \in C(s)$ and $C(S)^- = C(S) \setminus C(s)$. 

Observe that in Eq.\ \ref{eq:urds}, $t \in C(s)$ is only compared to the complement sets of $C(s)$. This prevents candidates that are dissimilar to other candidate sets from obtaining large scores simply because their features are abundant in $C(s)$. Intuitively, Eq.\ \ref{eq:urds} captures the comparisons between $t$ and the candidate sets in $C(S)^-$, where each individual score $SetSim(\{t\},$ $C(s'))$ is weighted by the cardinality of $C(s')$, such that a $C(s')$ with high cardinality has a smaller impact on the aggregated similarity measure. We do this to capture the intuition that small sets containing only a few representative instances (captured by only a few features) represent the class of interest better. 
%consequently we want that resources more similar to singleton pseudo-homonyms set to have a relative higher final score of similarity.


We further normalize the result of Eq.\ \ref{eq:urds} by the maximum score among all instances in $C(S)$ as
\begin{equation}
Sim(t,C(s),C(S))= \frac{Sim(t,C(S))}{MaxScore(C(s),C(S))} 
\end{equation}
where
\begin{multline}
MaxScore(C(s),C(S)) = \\ \max\{Sim(t', C(S)) \mid t' \in C(s) \in C(S)\}
\end{multline}

This yields a class-based similarity score in $[0,1]$. Using this function, an instance $t$ is considered a correct match for $s$ if $Sim(t,C(s), C(S))$ is higher than a threshold $\delta$, or when it is the top-$1$ result. We will refer to these two variants as the Threshold and the Top-1 approach, respectively. 

The Top-1 approach makes sense for those cases in the heterogeneous scenario where datasets are duplicate-free or a one-to-one mapping between source and target instances can be guaranteed. In these cases, as every instance in every dataset stands for a distinct real-world entity, there exists at most one correct match in the target for every instance in the source (i.e.\ likely the top-1). In the other cases, where there are one-to-many matches, the Threshold approach is used. Notice that the Threshold approach can also be used in the one-to-one matching scenario; as we will show empirically, it yields accuracy competitive with Top-1 in these cases. 
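The two selection variants can be sketched as follows; the score dictionary and the default $\delta$ are illustrative only, not values prescribed by the method:

```python
def select_matches(scores, delta=0.9, top1=False):
    """Select matches for one candidate set C(s).

    scores: dict mapping candidate instance -> normalized score in [0, 1].
    Top-1 keeps the single best candidate (duplicate-free datasets);
    Threshold keeps every candidate scoring above delta (one-to-many case).
    """
    if top1:
        return {max(scores, key=scores.get)}
    return {t for t, sc in scores.items() if sc > delta}

scores = {"t11": 1.0, "t12": 0.95, "t13": 0.4}
print(select_matches(scores, top1=True))   # {'t11'}
print(select_matches(scores, delta=0.9))   # {'t11', 't12'}
```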
%In other words, there are only one-to-one cross-dataset mappings in these cases.
% That is, there are no matching instances within datasets. Consequently, for every instance in the source, there exists at most one match in the target. 

\begin{figure} 
\vspace{-10pt}
\centering
\includegraphics[width=0.45\textwidth]{computation.pdf}
\caption{(a) Class-based similarity score for the candidate $t_{11}$ is obtained by comparing it with $C(s_2)$ and $C(s_3)$, (b) the score for $t_{11}$ and (c) the scores for all other candidates.}
 \vspace{-2pt}
 \label{fig:computation}
\end{figure} 


Class-based matching is illustrated in Fig.\ \ref{fig:computation} for the instance $t_{11}$, which is compared to the candidate sets in $C(S)^- = \{C(s_2), C(s_3)\}$. Notice that, in the end, $Sim(t_{11}, C(S))$ compares the features $F(\{t_{11}\})$ to $F(C(s_2))$ and to $F(C(s_3))$. This is done for all instances in $C(s_1)$, and the one with the highest score $Sim$ is assumed to be the correct match for $s_1$. Notice that for $C(s_2)$, the complement is $C(S)^- = \{C(s_1), C(s_3)\}$. Alg.\ \ref{alg:sim} illustrates the computation of $Sim$.

\begin{algorithm}[h]
\caption{SimScores($C(S)$).}
\begin{algorithmic}[1]
\scriptsize\tt 
\STATE  $scores \leftarrow \emptyset$
\FOR{$C(s) \in C(S)$}
\STATE $C(S)^- \leftarrow C(S) \setminus C(s)$
\STATE  $score_{C(s)} \leftarrow \emptyset$
\FOR{$t \in C(s)$}
\STATE  $score_t \leftarrow 0$
\FOR{$C(s') \in C(S)^-$}
\STATE  $score_t \leftarrow score_t + \frac{SetSim(\{t\}, C(s'))}{|C(s')|}$
\ENDFOR
\STATE  $score_{C(s)} \leftarrow score_{C(s)} \cup \{score_t\}$
\ENDFOR
\STATE  $scores \leftarrow scores \cup \{score_{C(s)}\}$
\ENDFOR 
 \STATE  $maxscore \leftarrow \max\{score_t \in score_{C(s)} \mid score_{C(s)} \in scores\}$
\FOR{$score_{C(s)} \in scores$}
\FOR{$i$ in $1..|score_{C(s)}|$}
\STATE  $score_{C(s)}[i] \leftarrow  \frac{score_{C(s)}[i]}{maxscore}$
\ENDFOR
\ENDFOR
\RETURN $scores$ 
\end{algorithmic}
\label{alg:sim}
\end{algorithm}
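For readers who prefer an executable form, Alg.\ \ref{alg:sim} can be transcribed roughly as follows. The inner \verb+fssim+ implements Eq.\ \ref{eq:setsimsr}, and the \verb+features+ map (instance to feature set) is assumed to be given; both names are ours, not part of the system's interface:

```python
def sim_scores(CS, features):
    """Per candidate set, return normalized class-based scores for its members.

    CS: list of candidate sets (each a set of instance ids).
    features: dict mapping an instance id to its frozenset of features F({t}).
    """
    def fssim(f1, f2):
        # Eq. (2): overlap count minus a difference term in [0, 1).
        common = len(f1 & f2)
        if common == 0:
            return 0.0
        return common - (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))

    def fset(instances):
        # F(X) of a set of instances is the union of their features.
        out = set()
        for t in instances:
            out |= features[t]
        return frozenset(out)

    raw = []
    for i, Cs in enumerate(CS):
        complement = [C for j, C in enumerate(CS) if j != i]  # C(S)^-
        # Eq. (3): sum over complement sets, weighted by their cardinality.
        raw.append({t: sum(fssim(features[t], fset(C)) / len(C)
                           for C in complement)
                    for t in Cs})
    # Eq. (4): normalize by the maximum score over all candidates.
    maxscore = max(sc for d in raw for sc in d.values()) or 1.0
    return [{t: sc / maxscore for t, sc in d.items()} for d in raw]
```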
 

%After the similarity is computed for all instances $t$ in this fashion, the instance, in each candidate set, with highest score is selected as a correct match for $s$.   
%In Section 5, we evaluate a configuration of SERIMI that combines direct matching with class-based matching. In this configuration, we simply multiple both scores, normalize them between [0,1], and selected the correct matches $M$ using the Top-1, or Threshold approach.  



%In the following, we will elaborate on a more efficient way to compute the class-based similarity score as well as an algorithm to automatically select the threshold $\delta$.

%We found that a value for $\delta$ that performs well is the maximum of the means and medians of the scores obtained for all instances in $C(S)$, which we will refer to as $\delta_{m}$. In the experiment, we tested using the different settings $\delta = \delta_m$, $\delta = 1.0$, $\delta = 0.95$, $\delta = 0.9$ and $\delta = 0.85$. Also, we evaluated different top-$k$ settings, where only the top-1, top-2, top-5 and top-10 matches were selected. 


