\section{Class-based Matching: A Solution} 
\label{sec:cbmsolution}

We will first present the main idea and then discuss extensions to this basic solution\footnote{The proposed solution is one possible implementation of the CBM problem. Its purpose is to show the reader that class-based matching (as an approach for instance matching) is as practical and as effective as established direct-matching approaches. Efficiency concerns are the subject of future research.}.

\subsection{Basic Solution}
Here, we present our implementation of the CBM approach.


\textbf{Class-based Matching.} 
Given a set of instances $S$ and the candidate sets $C(S)=\{C(s_1),\dots, C(s_n)\}$, we implement class-based matching by finding, in each candidate set, the instances $t$ (i.e.\ $t \in C(s) \in C(S)$) that are similar to the candidate sets $C(S)$ as a whole.

Our method starts by computing a similarity score between $t \in C(s)$ and $C(S)$ itself, i.e.,\ $Sim(\{t\}, C(S))$. In this process, $C(S)$ is considered the class of interest rather than the solution set $M$; this differs from the formal problem definition, where $M$ is both the class of interest and a solution set. In this approach, we start from $C(S)$ to obtain the solution set $M$ and $M(S)$.

This solution exploits the intuition that, given $t$ and any candidate set $C(s) \in C(S)$, if $F(\{t\})$ does not share any feature with $F(C(s))$, then $t$ is not similar to any instance in this candidate set. If $t$ is not similar to any candidate set $C(s) \in C(S)$, it cannot form a class with any candidate instance; therefore, based on the class-based matching assumption, it cannot be a correct match for $s$. Conversely, a candidate $t$ that is more similar to the other candidate sets is more likely to form a class with other candidates and, therefore, can be a correct match. This heuristic is implemented as follows.


The computation of $Sim(\{t\}, C(S))$ obtains a score for each individual instance $t \in C(s)$. Then, the solution set $M$ is composed of $t \in M(s) \subseteq C(s)$, where for all $t \in M(s)$, $Sim(\{t\}, C(S)) > \delta$. Below, we define $Sim$ and then describe how we compute the threshold $\delta$.
 
 
 
 
\begin{equation}
Sim(t,C(S))=\sum_{C(s') \in C(S)^-}\frac{SetSim(\{t\},C(s'))}{|C(s')|}
\label{eq:urds}
\end{equation}

where $t \in C(s)$ and $C(S)^- = C(S) \setminus C(s)$. 

First, note that in Eq.\ \ref{eq:urds}, $t \in C(s)$ is not compared with $C(s)$ but with the other candidate sets $C(s')$. In our implementation, $C(s)$ is computed via direct matching and thus contains candidates that are very similar to $t$. Just like the other candidate sets $C(s')$, these candidates help to capture the class of interest; however, due to their relatively high similarity to $t$, they would have too strong an impact compared to $C(s')$. Excluding $C(s)$ from the class similarity computation avoids this strong bias towards $C(s)$. Secondly, note that the individual score $SetSim(\{t\},$ $C(s'))$ is weighted by the cardinality of $C(s')$, such that a $C(s')$ with high cardinality has a smaller impact on the aggregated similarity measure. This leverages the observation that small candidate sets contain few but more representative instances; they are better representations of the class of interest.
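As an illustrative sketch in Python, Eq.~\ref{eq:urds} can be computed as follows. Here $SetSim$ is passed in as a parameter (it is defined later in this section), and $F(C(s'))$ is approximated as the union of the members' feature sets; both the data and the stand-in overlap function are hypothetical.

```python
def sim(t_feats, s, cand_sets, set_sim):
    """Eq. (1) sketch: aggregate SetSim({t}, C(s')) / |C(s')| over every
    candidate set except t's own set C(s)."""
    total = 0.0
    for s2, cand in cand_sets.items():
        if s2 == s:  # exclude C(s) itself to avoid the bias discussed above
            continue
        # F(C(s')) approximated as the union of member feature sets (assumption)
        feats = set().union(*cand.values())
        # weight by |C(s')| so large candidate sets have smaller impact
        total += set_sim(t_feats, feats) / len(cand)
    return total

# Hypothetical candidate sets: source instance -> {target: feature set}
cand_sets = {'s1': {'t11': {'a', 'b'}},
             's2': {'t21': {'a', 'c'}, 't22': {'d'}},
             's3': {'t31': {'a'}}}
overlap = lambda x, y: len(set(x) & set(y))  # stand-in for SetSim
sim({'a', 'b'}, 's1', cand_sets, overlap)    # 1/2 + 1/1 = 1.5
```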


We further normalize the result of Eq. \ref{eq:urds} by the maximum score among all instances in $C(s)$ as
\begin{equation}
Sim(t,C(s),C(S))= \frac{Sim(t,C(S))}{MaxScore(C(s),C(S))} 
\end{equation}
where
\begin{multline}
MaxScore(C(s),C(S)) = \\ \max\{Sim(t', C(S)) \mid t' \in C(s)\}
\end{multline}

This yields a class-based similarity score in $[0,1]$. The algorithm takes $O(|C(S)| \times |C|)$ time in the worst case (note that $|S|=|C(S)|$).
Using this function, an instance $t$ is considered a correct match for $s$ if $Sim(t,C(s), C(S))$ is higher than a threshold $\delta$, or when it is the top-$1$ result. We will refer to these two variants as the Threshold approach (for 1-to-many CBM and UCBM) and the Top-1 approach (for CBM), respectively.

The Top-1 approach makes sense for cases where the datasets are duplicate-free or a one-to-one mapping between source and target instances can be guaranteed. In this case, as every instance in every dataset stands for a distinct real-world entity, there exists at most one correct match in the target for every instance in the source (i.e.\ likely the top-1). In the other cases, where there are one-to-many matches, the Threshold approach is used. Notice that the Threshold approach is more general; as we will show empirically, it yields accuracy competitive with the Top-1 approach.
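A minimal sketch of the two selection variants, assuming the normalized scores for one candidate set have already been computed (the score dictionary below is hypothetical):

```python
def select(norm_scores, delta=None):
    """Top-1 variant (delta=None): keep the single best-scoring candidate.
    Threshold variant: keep every candidate scoring above delta."""
    if delta is None:
        return [max(norm_scores, key=norm_scores.get)]
    return [t for t, score in norm_scores.items() if score > delta]

scores = {'t11': 1.0, 't12': 0.97, 't13': 0.4}  # hypothetical normalized scores
select(scores)              # Top-1: ['t11']
select(scores, delta=0.95)  # Threshold: ['t11', 't12']
```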

\begin{figure} 
\vspace{-10pt}
\centering
\includegraphics[width=0.45\textwidth]{computation.pdf}
\caption{(a) Class-based similarity score for the candidate $t_{11}$ is obtained by comparing it with $C(s_2)$ and $C(s_3)$, (b) the score for $t_{11}$ and (c) the scores for all other candidates.}
 \vspace{-2pt}
 \label{fig:computation}
\end{figure} 


Class-based matching is illustrated in Fig.\ \ref{fig:computation} for the instance $t_{11}$, which is compared to the candidate sets $C(s_2)$ and $C(s_3)$, i.e.\ $C(S)^- = \{C(s_2), C(s_3)\}$. Notice that, in the end, $Sim(t_{11}, C(S))$ compares the features $F(\{t_{11}\})$ to $F(C(s_2))$ and to $F(C(s_3))$. This is done for all instances in $C(s_1)$, and the one with the highest score $Sim$ is assumed to be the correct match for $s_1$. Notice that for $C(s_2)$, $C(S)^-$ is defined as $C(S)^- = \{C(s_1), C(s_3)\}$. Alg.\ \ref{alg:sim} illustrates the computation of $Sim$.

\begin{algorithm}[h]
\caption{SimScores($C(S)$).}
\begin{algorithmic}[1]
\scriptsize\tt 
\STATE  $scores \leftarrow \emptyset$
\FOR{$C(s) \in C(S)$}
\STATE $C(S)^- \leftarrow C(S) \setminus C(s)$
\STATE  $score_{C(s)} \leftarrow \langle\rangle$
\FOR{$t \in C(s)$}
\STATE  $score_t \leftarrow 0$
\FOR{$C(s') \in C(S)^-$}
\STATE  $score_t \leftarrow score_t + \frac{SetSim(\{t\}, C(s'))}{|C(s')|}$
\ENDFOR
\STATE  append $score_t$ to $score_{C(s)}$
\ENDFOR
\STATE  $scores \leftarrow scores \cup \{score_{C(s)}\}$
\ENDFOR 
\FOR{$score_{C(s)} \in scores$}
\STATE  $maxscore \leftarrow \max(score_{C(s)})$
\FOR{$i$ in $1..|score_{C(s)}| $}
\STATE  $score_{C(s)}[i] \leftarrow  \frac{score_{C(s)}[i]}{maxscore} $
\ENDFOR
\ENDFOR
\RETURN $scores$ 
\end{algorithmic}
\label{alg:sim}
\end{algorithm}
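The computation can also be sketched in Python. In this sketch, $SetSim$ is passed as a parameter (it is defined later in this section), $F(C(s'))$ is approximated as the union of the members' feature sets, and each candidate set is normalized by its own maximum score, following the definition of $MaxScore(C(s),C(S))$; the data below is hypothetical.

```python
def sim_scores(cand_sets, set_sim):
    """Compute Sim(t, C(S)) for every candidate t (Eq. 1), then normalize
    each candidate set by its own maximum score (Eq. 2)."""
    scores = {}
    for s, cand in cand_sets.items():
        raw = {}
        for t, t_feats in cand.items():
            total = 0.0
            for s2, other in cand_sets.items():
                if s2 == s:  # skip t's own candidate set C(s)
                    continue
                feats = set().union(*other.values())  # F(C(s')) as union (assumption)
                total += set_sim(t_feats, feats) / len(other)
            raw[t] = total
        max_score = max(raw.values()) or 1.0  # guard against an all-zero set
        scores[s] = {t: v / max_score for t, v in raw.items()}
    return scores

# Hypothetical input, with raw feature overlap standing in for SetSim:
cand_sets = {'s1': {'t11': {'a', 'b'}},
             's2': {'t21': {'a', 'c'}, 't22': {'d'}},
             's3': {'t31': {'a'}}}
overlap = lambda x, y: len(set(x) & set(y))
sim_scores(cand_sets, overlap)
```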
 




We found that a value for $\delta$ that performs well is the maximum of the means and medians of the scores obtained for all instances in $C(S)$, which we will refer to as $\delta_{m}$. In the experiments, we tested the settings $\delta = \delta_m$, $\delta = 1.0$, $\delta = 0.95$, $\delta = 0.9$ and $\delta = 0.85$. Also, we evaluated different top-$k$ settings, where only the top-1, top-2, top-5 and top-10 matches were selected.

 
\textbf{Similarity Function.} Now, we introduce $SetSim(X_1,X_2)$ to compute the similarity between two sets of instances $X_1$ and $X_2$ based on their sets of features $F(X_1)$ and $F(X_2)$:

\begin{equation}
SetSim(X_1,X_2)=FSSim(F(X_1),F(X_2))
\end{equation} 


where $FSSim(F(X_1),F(X_2))$ is a function capturing the similarity between $F(X_1)$ and $F(X_2)$. 

Early work such as Tversky's \cite{tversky1977features} shows that the similarity of a pair of items depends both on their commonalities and their differences. This intuition is exploited by similarity functions used for instance matching which, like the Jaccard similarity, give the \emph{same weight} to commonalities and differences.


We depart from the equal-weight strategy to place a \emph{greater emphasis on commonalities}. This is because the goal of class-based matching is to find whether some instances match a class, which, by our definition, is the case when they share many features with that class.
For deciding whether an instance belongs to a class, the common features are thus, by definition, more crucial. Moreover, the special treatment of common features also makes sense considering that common features are scarcer: the number of features shared by all instances in a class is typically much smaller than the number of features that are not shared.

We propose the following function to capture this intuition:

\begin{equation}
\footnotesize
FSSim(f_1,f_2) = \left\{ 
  \begin{array}{ll}
     0 &  \text{if } |f_1\cap f_2|=0 \\
     |f_1\cap f_2| - \frac {|f_1 - f_2| + |f_2 - f_1|}{2\,|f_1 \cup f_2|} &  \text{otherwise}
  \end{array} \right. 
\label{eq:setsimsr}
\end{equation}
where $f_1$ and $f_2$ stand for $F(X_1)$ and $F(X_2)$, respectively. $FSSim(f_1,f_2)$ only considers $f_1$ and $f_2$ to be similar when there exist some commonalities (i.e.\ $FSSim(f_1,f_2)=0$ if $|f_1\cap f_2|=0$). The first term, $|f_1\cap f_2|$, has a much larger influence: it captures commonalities as the number of overlapping features between $f_1$ and $f_2$, which is always at least 1. The second term, $\frac {|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|}$, capturing the differences, is always smaller than 1. In fact, given $f_j$ and $f_k$ that have $n$ and $n-1$ features in common with $f_i$, respectively, $FSSim$ always returns a higher score for $f_j$.

For example, assuming $f_1=F($\{\verb+db:Belmont_California+\}$)$, $f_2=F($\{\verb+db:Belmont_France+\}$)$ and $f_3=F(C$(\verb+nyt:5555+)); then, $FSSim(f_1, f_3)  = 3.65$, while $FSSim(f_2, f_3) = 1.5$. The scores reflect  the fact that $f_1$ has 4 features in common with $f_3$, while $f_2$ has only 2.
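The case distinction of Eq.~\ref{eq:setsimsr} translates directly into code. The following sketch uses hypothetical feature sets (not the data of the running example above):

```python
def fssim(f1, f2):
    """FSSim sketch: 0 without any overlap; otherwise the overlap count
    minus a normalized difference term that is always below 1."""
    f1, f2 = set(f1), set(f2)
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    diff = (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))
    return common - diff

# Hypothetical feature sets:
fssim({'a', 'b', 'c'}, {'a', 'b', 'd'})  # 2 - (1+1)/(2*4) = 1.75
fssim({'a', 'b'}, {'c', 'd'})            # no overlap: 0.0
```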

Notice that $FSSim$ does not capture any class semantics; it is a set similarity function tailored towards commonalities, supporting the intuition discussed above. However, the class semantics are inferred as a result of applying this similarity computation in our approach: the instances found by CBM form a class that corresponds to the class of interest.


The bias towards commonalities is captured by the following theorem, which does not hold for the Jaccard function (see Appendix A):  

\begin{theorem}
If $|f_i\cap f_j| > |f_i \cap f_k|$ then $FSSim(f_i,f_j) > FSSim(f_i,f_k)$.
\label{theorem:t1}
\end{theorem}
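The contrast with Jaccard can be checked on a small example with hypothetical feature sets: $FSSim$ preserves the ordering of Theorem~\ref{theorem:t1}, while Jaccard can invert it when the set with more common features is also much larger.

```python
def fssim(f1, f2):
    # FSSim: overlap count minus a normalized difference term (< 1)
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    return common - (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))

def jaccard(f1, f2):
    return len(f1 & f2) / len(f1 | f2)

fi = {'a', 'b', 'c', 'd'}
fj = {'a', 'b', 'u', 'w', 'x', 'y', 'z'}  # shares 2 features with fi
fk = {'a'}                                # shares only 1 feature with fi
fssim(fi, fj) > fssim(fi, fk)      # True: more commonalities score higher
jaccard(fi, fj) < jaccard(fi, fk)  # True: Jaccard inverts the ordering
```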


