% The Computer Society usually requires 10pt for submissions.
%
\documentclass[10pt,journal,compsoc]{IEEEtran}
%
%\hyphenation{op-tical net-works semi-conduc-tor}

\usepackage{amsmath}
\usepackage{color}
\usepackage[]{graphicx}
\usepackage{balance} 
\usepackage{setspace} 
\usepackage{spverbatim} 
\usepackage{algorithm}
\usepackage{algorithmic} 
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{myproof}{Proof}
\newtheorem{lemma}{Lemma}
\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\definecolor{question}{RGB}{25,25,112}
%\definecolor{highlight}
\newcommand{\todo}[1]{\textcolor{red}{@TODO: #1}}
\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{SERIMI: Class-based Matching for Instance Matching Across Heterogeneous Datasets}

\author{Samur Araujo, Duc Thanh Tran, Arjen P. de Vries and Daniel Schwabe 
                 
% note need leading \protect in front of \\ to get a newline within \thanks as
% \\ is fragile and will error, could use \hfil\break instead.
 
 }

 
\IEEEcompsoctitleabstractindextext{%
 

}


% make the title area
\maketitle

\section{Responses to All Reviewers}
First of all, we thank all reviewers for their valuable comments. We have taken them all into account to improve the paper. All textual suggestions have been incorporated directly; the remaining issues are addressed below. 

%We firstly provide responses to issues raised by all reviewers.

We restructured the introduction to make the examples clearer. We also restructured Section 2 (Overview of the Approach) and Section 3 (Class-Based Matching) into four sections, aiming to clarify the definitions, the approach, the class-based matching problem, and the proposed solution. The four sections are: 

\begin{itemize}

\item 2 Preliminary Definitions 
\item 3 Overview of the Approach
\item 4 Class-Based Matching: The Problem
\item 5 Class-Based Matching: A Solution

\end{itemize} 

Other modifications made to the paper to address a particular reviewer's question are mentioned in the corresponding response. Response texts marked in blue are not included in the paper.

 
\section{Responses to Reviewer 1}
 
\textit{\textbf{Issue 1: }} \textit{Sec 1: I can't figure out whether the example in Table 1 is a source dataset or a target dataset, or the
authors put them together. Please clarify}

\textbf{Response. } \textcolor{question}{Table 1 modified in the introduction.}

\begin{table}[ ]
\centering
\caption{Instances represented as RDF triples.}
%\scriptsize\tt
\scriptsize
%\small
\begin{tabular}{|l|l|l|}
\hline  
\multicolumn{3}{|c|}{Source Dataset} \\
\hline
Subject & Predicate/Attribute & Object/Value \\
\hline
nyt:2223 & rdfs:label & 'San Francisco' \\
nyt:5962 & rdfs:label & 'Belmont' \\
nyt:5962 & geo:lat & '37.52' \\
nyt:5555 & rdfs:label & 'San Jose' \\
nyt:4232 & nyt:prefLabel & 'Paris' \\
 

geo:525233 & rdfs:label & 'Belmont' \\ 
geo:525233   & in:country & geo:887884 \\ 
geo:525233  & geo:lat & '37.52' \\
\hline

\multicolumn{3}{|c|}{Target Dataset} \\
\hline
Subject & Predicate/Attribute & Object/Value \\
\hline
    db:Usa & owl:sameas & geo:887884 \\ 
  db:Paris & rdfs:label & 'Paris' \\ 
  db:Paris  & db:country & db:France \\
db:Belmont\_France & rdfs:label & 'Belmont' \\ 
db:Belmont\_France  & db:country & db:France \\  
db:Belmont\_California & rdfs:label & 'Belmont' \\ 
db:Belmont\_California  & db:country & db:Usa \\  
 
db:San\_Francisco & rdfs:label & 'San Francisco' \\ 
db:San\_Francisco   & db:country & db:Usa \\ 
db:San\_Francisco     & db:locatedIn & db:California \\ 
  db:San\_Jose\_California & rdfs:label & 'San Jose' \\ 
  db:San\_Jose\_California     & db:locatedIn & db:California \\ 
    db:San\_Jose\_Costa\_Rica & rdfs:label & 'San Jose' \\ 
   db:San\_Jose\_Costa\_Rica  & db:country & db:Costa\_Rica \\ 
  \hline 
  \end{tabular} 
\label{table:examples}
\vspace{-10px}
\end{table}


\textit{\textbf{Issue: }} \textit{The db:Belmount\_France example does not help me understand either. Why is it related to "class-based matching"? Is it in a heterogeneous setting? What's the drawback of the direct matching methods when dealing with it?}

\textbf{Response. }  \textcolor{question}{The example problem is used to illustrate that direct matching does not work well in heterogeneous scenarios in which the overlap between datasets is small. We show later that using class-based matching helps to address such a problem. We modified several paragraphs in the introduction to clarify the example. The new text is as follows:}

 \textit{Semantic-driven approaches}  use specific OWL semantics, such as explicit \verb+owl:sameas+ statements, to allow the same-as relations to be inferred via logical reasoning. 
%Clearly, this type of approaches is only effective when datasets are represented in OWL and capture the semantics necessary for reasoning. 
Complementary to this,  \textit{data-driven approaches}   derive same-as relations mainly based on attribute values of instances. 
%For instance, \verb+nyt:5962+ is recognized as being the same as \verb+db:BelmontCalifornia+ because they both have 'Belmont' as \verb+rdfs:label+. 
While they vary with respect to the selection and weighting of features, existing data-driven approaches are built upon the same paradigm of \textit{direct matching}, namely, two instances are considered the same when they have many attribute values in common. 
%By direct matching two instance representations, they refer to the same real word entity if their similarities exceed a threshold. 
Hence, they produce high quality results only when there is sufficient overlap between instance representations. This overlap may, however, be small in heterogeneous datasets, especially because the same instance represented in two distinct datasets may not use the same schema.



For example, in Table \ref{table:examples}, the source instance \verb+nyt:5962+ and the target instances   \verb+db:Belmont_France+ and \verb+db:Belmont_California+ share the same \verb+rdfs:label+ value, i.e., the string 'Belmont' (see  Fig. \ref{fig:graphexample1}). However, \verb+rdfs:label+ is the only attribute whose values overlap across both datasets, as the source and target graphs use rather distinct schemas. This overlap alone is not sufficient to determine whether \verb+nyt:5962+ is the same as  \verb+db:Belmont_France+ (or \verb+db:Belmont_California+). In this scenario of \emph{instance matching across heterogeneous datasets}, direct matching alone   cannot be expected to deliver high quality results.


%In order to find whether the instances match, existing approaches directly match information from the source against the target. This matching might be based on finding attributes and values (features) the instances have in common or computing the similarities between the instances using various functions and thresholds. However, the number of features they have in common is small and also the similarity is too low to find matches in this \emph{heterogeneous} scenario. 
%whether \verb+nyt:5962+ is the same as  \verb+db:Belmont_France+ (or \verb+db:Belmont_California+). direct matching alone cannot deliver high quality results.
%to find matches in this scenario.  of \emph{instance matching across heterogeneous datasets},  if the overlap is small, the similarity small overlap alone is not sufficient to 

\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{dm.pdf}
\caption{Examples of instances that share a common attribute value.} 
\label{fig:graphexample1}
\end{figure} 

%Some authors\cite{DBLP:conf/www/HuCQ11} propose to combine the semantic-driven and data-driven approaches to obtain the best of both worlds. For instance, in Table 1, we could find the match for \textit{geo:525233} is \textit{db:Belmont\_California} by direct matching their \textit{rdfs:label} and reinforce the match by inferring that \textit{db:Usa} is the same than \textit{geo:887884}. 
%
%Although these approaches are feasible, the question about why and when they work, still remains. Due heterogeneous nature of the Semantic Web data, which is incomplete, noisy and diverse, an import question for instance matching is:  What characteristics the data must have to the data-driven or semantic-driven approach work with its highest accuracy? Is there any other data characteristic that can be exploited to improve the accuracy of those methods?

\textbf{Contributions.} We provide a (1) \emph{detailed analysis} of many datasets and matching tasks investigated in the OAEI 2010 and 2011 \cite{DBLP:conf/semweb/EuzenatFHHMNRSSSST11} instance matching benchmarks. We show that tasks greatly vary in their complexity. There are difficult tasks with a small overlap  between datasets that cannot be effectively solved using state-of-the-art direct matching approaches. 
%It is based on the coverage and discriminative power of the instances' predicates. This complex measure is used to select the necessary and sufficient combination of predicates so that the overlapping of information between instance representations is maximized. Consequently producing the highest matching accuracy.
Aiming at these tasks, we propose to use direct matching in combination with (2) \textit{class-based matching (CBM)}. 
%An unsupervised method complementary to direct matching and semantic matching approaches. 
%It can be applied in combination with the direct matching approaches. 
%specially when the direct matching cannot solve ambiguity in the data due to the lack of overlapping information. 

%Often, there is type information available in data such as RDF, capturing the class(es) to which instances belong to. However, even when class information is complete (which cannot be assumed), classes from different datasets greatly vary: for instance, \ Nation and \ Country might be given as classes for the data in our example, however they are associated with completely different attributes, except for \verb+rdfs:label+. For the heterogeneous setting, we do not assume these explicitly defined classes are available or useful for matching. Instead, we infer classes from the data.
In this paper, we employ the following class notion: a class is a set of instances in which each instance shares at least one feature with every other instance in the set. 

Based on this notion, CBM works as follows: given a class of instances from the source dataset (e.g., \verb+nyt:2223+ and \verb+nyt:5962+), called the \emph{class of interest}, and a set of candidate matches retrieved from the target via direct matching (e.g., \verb+db:San_Francisco+, \verb+db:Belmont_France+ and \verb+db:Belmont_+\verb+California+), CBM aims to refine the set of candidates by filtering out those that do not match the class of interest. This matching does not, however, assume that class semantics are explicitly given, such that a direct matching at the class level between the source (e.g.\ Nations) and target (e.g.\ Countries) would be possible.
%method infer the Sameas relations by detecting a class of target instances among those candidates that contains at least one match of each source instance. 
Instead, CBM is based on the following idea: given that the source instances are known to form a class (they have some features in common), their matches should also form a class in the target dataset (they should likewise have some features in common). Thus, correct matches can be found by computing the subset of candidates whose members have the most features in common. Because these candidates correspond to source instances (as computed by the direct matching method), the class they form corresponds to the source class, i.e., the instances found by CBM belong to a class that matches the class of interest. Note that in this process, the source and target instances are compared only during the candidate selection step. During class-based matching, \emph{only data from the target dataset} is needed. This is the main difference from direct matching, which compares data from the source with data from the target. 

%\begin{figure}[h]
%\centering
%\includegraphics[width=0.3\textwidth]{cbm.pdf}
%\caption{A set of instances with common features defines a  class, implicitly. A set of source that forms a class should match to a  set of %target instances that also forms a class. } 
%\label{fig:cbm}
%\end{figure} 

%Instead, it is a data-driven approach, which derives the class of interest from information in the target. Then, candidates in the target are compared with this latent representation of the class of interest. During this process, there is no comparison between source and target but only data from the target is used for matching. 

%For example, in Table \ref{table:examples}, the instances \verb+nyt:2223+ and \verb+nyt:5962+ from the source dataset belong to the (implicit) class ``cities in California''. The candidates matches from the target dataset   are \verb+db:San_Francisco+, \verb+db:Belmont_France+ and \verb+db:Belmont_+ \verb+California+, as depicted in Fig.\ \ref{fig:graphexample2}.

\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{cbmexample.pdf}
\caption{Class-based matching.} 
\label{fig:graphexample2}
\end{figure} 

In the example depicted in Fig. \ref{fig:graphexample1}, class-based matching would select \verb+db:Belmont_California+ and \verb+db:San_+ \verb+Francisco+ as the correct matches, because these instances are the most similar among the candidates: they share the predicate \verb+db:country+ and the value \verb+db:Usa+, as depicted in Fig. \ref{fig:graphexample2}.


 

\textit{\textbf{Issue 2: }}  \textit{Sec 2: It's quite difficult to understand why the class-based instance matching works. I think the authors really need some effort to restructure this section. Some definitions and equations, e.g., Eq. 3, is worth elaborating so the readers can capture the intuition of the proposed framework. }

\textbf{Response. }  \textcolor{question}{As presented above, we have modified the introduction to better convey the intuition behind CBM. We have also restructured other parts: previously, Section 2 contained the preliminary definitions (now Section 2 in the modified version) and an overview of the approach (now Section 3). The overview previously also contained a formal introduction of CBM. We decided to remove it from the overview (the overview in Section 3 now only contains direct matching and a brief presentation of SERIMI). That formal introduction has been largely revised to address all reviewers' comments and placed in a separate Section 4. We now present this newly edited Section 4 and discuss how we addressed the reviewer's comment:}

\section{Class-Based Matching}


%\textbf{Class-Based Matching.} 
 
%\subsection{The problem}
%
%Given the source instances $S$ and their candidate instances $C$, class-based matching  is the problem of finding the correct matches $M(s)$ to $s \in S$ when $s \in S$ and $t \in C$ share insufficient features for reliable direct matching. The  information  considered to find   matches $M(s)$   is therefore only the features of the  candidate instances  $t \in C$. 
%
%Particularly, class-based matching is build upon the observation that matching is performed for a class of source instances. That is, all $s \in S$ belong to a specific class\footnote{Notice that when the input $S$ captures different classes, it can be partitioned into sets of instances representing specific classes \cite{typifier}.}.  Our assumption is that if $S$ is a class (its members share common features), then the set of correct matches   for $s \in S$ should also belong to a class, i.e., the correct target   matches $M^*= \{t | t \in M(s) \in M(S) \}$ should also share some common features among themselves. 
 %
%Using this assumption, the challenge of finding $M^*$ can be solved by finding the subset $M \subseteq C$ that forms the most concise class, i.e.\ where the similarities of the instances in $M$ are maximized, or, in the optimal case, $M=M^*$.  
%
%Class-based matching does not directly compare $s \in S$ with a candidate $t \in C$. Rather, it determines whether $t$ is a match or not based on its class membership, i.e. whether it ``belongs to'' $M^*$. Here, $M^*$ acts as an idealized instance-based representation of the target class of interest. In practice, $M^*$ is not given but stands for the actual result to be determined by instance matching. 
%
%
%In the current example, the instance in the set $M_1=$ \{\verb+db:Belmont_California+, \verb+db:San_Francisco+ and \verb+db:San_Jose_+\verb+California+\}  are more similar to each other than those from the set $M_2=$ \{\verb+db:Belmont_France+, \verb+db:San_Francisco+  and \verb+db:San_Jose_+\verb+California+\}. We say, therefore, $M_1$  is a more concise class than $M_2$. Precisely,  the candidate \verb+db:Belmont_California+ shares the predicate \verb+db:country+ and value \verb+db:Usa+ with the instance \verb+db:San_Francisco+, which shares the predicate \verb+db:locatedIn+ and value \verb+db:California+ with the instance \verb+db:San_Jose_+\verb+California+. Consequently, class-based matching would consider   $M_1$ as more likely to contain the correct matches for the source instances than $M_2$.


Let $S$ be the instances from the source dataset and $M^*$ be the ground truth, containing all and only correct matches in the target dataset. The set of candidate instances $C$ computed via direct matching might be neither sound nor complete, i.e., there may be a candidate in $C$ that is not in $M^*$ and an element of $M^*$ that is not in $C$, when some $s \in S$ and the corresponding elements $t \in C$ have only few directly matching features. Class-based matching aims to identify the unsound matches in $C$ (to improve soundness, i.e., precision), using only the features of the candidate instances $t \in C$. 

Particularly, CBM is built upon the observation that matching is usually performed for a class of source instances. That is, all $s \in S$ belong to a specific class\footnote{Notice that when the input $S$ captures different classes, it can be partitioned into sets of instances representing specific classes \cite{typifier}.}.  Our idea is that if $S$ is a class, i.e., its instances share some features, then correct matches for $s \in S$ should also belong to a class, i.e., instances in $M^*$
%= \{t | t \in M(s) \in M(S) \}$ 
should also share some common features. We then aim to approximate $M^*$ by finding the subset $M \subseteq C$ whose instances are most similar to each other (compared to the other candidate subsets). These instances are considered \emph{class-based matches} because they form a class that matches the class of interest. 

%Consequently, class-based matching tries to find $M^*(S)$ from the candidates $t \in C(s) \in C(S)$.

%More precisely, class-based matching tries to find the set that  uses an approximation of this class because $M^*(S)$ is not given but stands for the actual result to be determined by instance matching. In this way, only information from the target is used for matching, namely the candidates $t \in C(s) \in C(S)$ and the CoI $M^*(S)$ captured as subsets of these candidates. 



%Let $M^*(S)$ be the optimal result that contains only correct matches for every $s \in S$. Class-based matching does not directly compare $s$ with a candidate $t \in C(s)$. Rather, it determines whether $t$ is a match or not based on its class membership, i.e. whether it ``belongs to'' $M^*(S)$, which acts as an instance-based representation of the class of interest (CoI). More precisely, it uses an approximation of this class because $M^*(S)$ is not given but stands for the actual result to be determined by instance matching. In this way, only information from the target is used for matching, namely the candidates $t \in C(s) \in C(S)$ and the CoI $M^*(S)$ captured as subsets of these candidates. 


%For example, consider a set of source instances $S$  of the type \verb+Drug+. Class-based matching exploits the intuition that correct matches in the target must belong to a class that corresponds to \verb+Drug+, e.g. \verb+Medication+. However, our previous study of heterogeneous data shows that class information is often missing or might be too general to be useful \cite{typifier}. That is, the class to which $t$ (or $s$) belongs to might not be explicitly given in the target (source); or the class to which $t$ belongs is specific (e.g. \verb+Medication+) while the one $s$ belong to is much more general (e.g \verb+Product+). Hence, classes might be missing or too different in granularity such that a direct matching at the class-level is not always possible.

%Addressing this, class-based matching employs a latent, \emph{instance-based representation} of the class that is determined during the instance matching process. In this example, we would need a representation for the class \verb+Medication+, for which we employ an approximation of $M^*(S)$ that shall capture all instances of the type \verb+Medication+. Because the matching is non-direct, the class   which $s$ belongs to is not needed but the candidates $t \in C(s) \in C(S)$ and $M^*(S)$. The problem is how to obtain a good approximation that captures all instances of the CoI, which coincides with the actual problem of instance matching, i.e. finding the optimal results $M^*(S)$. 

\subsection{Formal Definition} 
%The class-based matching problem is a particular case of clustering, where we want to separate the set of candidate matches $C = \bigcup_{\forall s\in S: C(s) \in C(S)} C(s)$ in two clusters, namely $M$ and $M^-$, which contain the matches and non-matches of $S$, respectively, and for all $s\in S$, we want to identify $M(s)$, where $M = \bigcup_{\forall s\in S: M(s) \subseteq C(s)} M(s)$ and $M(s) \neq \emptyset$. 

%The class-based matching can be  seem as a particular case of clustering, where the candidates $C$ are separated into the matches $M$ and the non-matches $M^-$.  It resembles a clustering problem because it is an unsupervised approach for separating the data spaces (i.e.\ the candidates $C$) but it is substantially different from clustering techniques (e.g.\ k-means). 

%First, class-based matching focus on finding the set $M$.  It means that   traditional clustering approaches are not sufficient because even if it could separate $C$ in two clusters, we would still have the problem of deciding which one is $M$ between the two clusters. 

%Second,  we assume there is exactly one match for $s \in S$. This adds the additional constraint to the problem, not required in the traditional clustering settings, that  $\forall s\in S: |M(s)| = 1$, assuming $C(s) \neq \emptyset$. Notice that in verifying this constraint equates in computing  $M(s) = C(s) \cap M$, i.e.\ it consists of mapping the correct matches to their corresponding source instances. Therefore, if we can find $M$, we solve the problem of finding $M(S)$, as well. Given $M$ and $C(s) \in C(S)$, this particular problem of computing $M \cap C(s)$ for all $s \in S$ can be solved in $O(ln |M|)$ \cite{Vazirani:2001:AA:500776}. 

For the sake of presentation, we first formalize the basic version of our problem: let us assume that individual datasets do not contain duplicates, such that for each source instance the goal is to find exactly one match in the target dataset, i.e., $|M|=|S|$ with $|M(s)| = 1$ for all $s \in S$. The CBM problem can then be formulated as follows:

%\begin{definition}[CoI, Class-based Matching]  To find the best representation for the CoI and solution for the class-based matching problem, respectively, consists of computing
\begin{definition}[Class-based Matching (CBM)]  The solution for the class-based matching problem can be computed as 
\begin{equation}
\footnotesize
\begin{aligned}
&  M^* \approx \argmax_{M  \in \mathbf{M}} \frac{\sum_{t \in M} Sim(t, M)}{|M|} \\
& \text{Subject to:} \\
&  \forall s \in S:  |C(s) \cap M| = |M(s)| = 1 
\end{aligned}
\label{eq:opt}
\end{equation}
% &   Sim(t, M)  > \delta  \text{ and } \\

where $\mathbf{M}$ is the set containing all possible candidate subsets $M$ as elements, and
%The term $Sim(t, M)  > \delta$ captures the heuristic that avoids non-matches, i.e.\ focuses only on finding $M$. 
$Sim(t, M)$ is a function that returns the similarity between an instance $t$ and the subset of candidates $M$. %and  $\delta$ is a similarity threshold

%\DTR{i remove this because it just looks confusing at this place and is not really needed to capture the problem:}
%Note that $\sum_{t \in M} Sim(t, M)$ is equal to: 
%\begin{equation}
%\footnotesize
%\frac{1}{2} \sum_{t_i \in M}\sum_{t_j \in M} sim(t_i, t_j)  -  \sum_{t_i \in M}sim(t_i, t_i)\\
  %\label{eq:quadratic}
%\end{equation}
\end{definition}

%It requires to compute the similarity matrix $|M| \times |M|$ among the candidates instance. which in the worse case (when M = C) is bound by $O(|C|^2)$.
%Note that $Sim(t, M)$ operates over \textit{features} extracted from the instance $t$ and instances in the sets in $M$. This will be detail further, in our proposed solution to this problem.

%. However, we have empirically verified the CBM assumption for the problem of instance matching. The results (F1 measures) presented in the paper show that by solving CBM using an efficient approximated method proposed in Sec. \ref{sec:cbmsolution}, we obtain a set $M$ that approximates a  given ground truth $M^*$ with sufficient accuracy.

As an approximation of $M^* \in \mathbf{M}$, we compute a subset of candidates $M$ whose instances are similar to each other, i.e., the goal is to maximize $Sim(t, M)$ for all $t \in M$. Among all possible candidate subsets, the solution is the one whose instances are most similar to one another. Further, in this basic setting, it contains exactly one candidate for every source instance. 

As an example, for the data in our scenario we have the candidate subsets $M_1=$ \{\verb+db:Belmont_California+, \verb+db:San_Francisco+ and \verb+db:San_Jose_+\verb+California+\} and $M_2=$ \{\verb+db:Belmont_France+, \verb+db:San_Francisco+ and \verb+db:San_Jose_+\verb+California+\}. The similarity among the instances in $M_1$ is higher than the similarity among the instances in $M_2$: the candidate \verb+db:Belmont_California+ shares the predicate \verb+db:country+ and value \verb+db:Usa+ with the instance \verb+db:San_Francisco+, which in turn shares the predicate \verb+db:locatedIn+ and value \verb+db:California+ with \verb+db:San_Jose_+\verb+California+. Thus, CBM considers $M_1$ a better approximation of $M^*$ than $M_2$.
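To make this comparison concrete, the following sketch evaluates the objective of the CBM definition on the running example. It is an illustration only: we assume a Jaccard similarity over per-instance feature sets transcribed from Table 1, and take $Sim(t, M)$ to be the mean similarity of $t$ to the other members of $M$; the concrete similarity function used by SERIMI may differ.

```python
# Toy evaluation of the CBM objective on the Table 1 example.
# Assumption: Sim(t, M) = mean Jaccard similarity of t to the other members.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def subset_score(M, feats):
    """Average of Sim(t, M) over t in M, as in the CBM objective."""
    def sim_t(t):
        others = [u for u in M if u != t]
        return sum(jaccard(feats[t], feats[u]) for u in others) / len(others)
    return sum(sim_t(t) for t in M) / len(M)

# Feature sets (predicates and objects) read off Table 1.
feats = {
    "db:Belmont_California":  {"rdfs:label", "Belmont", "db:country", "db:Usa"},
    "db:Belmont_France":      {"rdfs:label", "Belmont", "db:country", "db:France"},
    "db:San_Francisco":       {"rdfs:label", "San Francisco", "db:country",
                               "db:Usa", "db:locatedIn", "db:California"},
    "db:San_Jose_California": {"rdfs:label", "San Jose", "db:locatedIn",
                               "db:California"},
}
M1 = ["db:Belmont_California", "db:San_Francisco", "db:San_Jose_California"]
M2 = ["db:Belmont_France", "db:San_Francisco", "db:San_Jose_California"]
# M1 scores higher than M2, so CBM prefers it as the approximation of M*.
```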

We note that instance matching approaches typically do not provide a theoretically sound and complete solution. As captured above, CBM is also only an approximate solution in that sense. The quality of the approximation produced by our approach is studied in experiments using real-world matching tasks and datasets. 


\textbf{Computational Complexity.} The following theorem captures the complexity of this problem: 

\begin{theorem}
CBM is an instance of the \textit{maximum edge-weighted clique problem (MEWCP)} \cite{DBLP:journals/eor/AlidaeeGKW07}; therefore, CBM is NP-hard.  
\end{theorem}

 

\begin{proof} Each candidate $t \in C$ can be mapped to a vertex in an undirected graph $G$. Two vertices $x, y \in C$ are connected if and only if $x \in C(s_i)$ and $y \in C(s_j)$ with $s_i \neq s_j$. The weight of an edge $\{x, y\}$ is given by $sim(x,y)$. Hence, any clique of size $|S|$ in $G$ contains exactly one candidate from each $C(s) \in C(S)$, and a solution to the CBM problem is such a clique with maximum edge weight.   
\end{proof}
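The construction in the proof can be exercised with a small brute-force search: each candidate becomes a vertex, and enumerating one candidate per $C(s)$ enumerates exactly the cliques of size $|S|$. This is a sketch with made-up candidate sets and edge weights, not the approximation algorithm proposed later; the exponential cross product mirrors the hardness of the exact problem.

```python
# Brute-force MEWCP over cliques with one vertex per candidate set C(s).
# Candidate sets and edge weights below are hypothetical illustrations.
from itertools import combinations, product

def max_weight_clique(cand_sets, sim):
    """Return the clique (one candidate per C(s)) with maximum edge weight."""
    best, best_w = None, float("-inf")
    for clique in product(*cand_sets):   # one candidate per source instance
        w = sum(sim[frozenset(e)] for e in combinations(clique, 2))
        if w > best_w:
            best, best_w = clique, w
    return best, best_w

# Edge weights sim(x, y) between candidates of different source instances.
sim = {
    frozenset({"db:Belmont_California", "db:San_Francisco"}): 0.43,
    frozenset({"db:Belmont_France", "db:San_Francisco"}): 0.25,
}
cand_sets = [["db:Belmont_California", "db:Belmont_France"],
             ["db:San_Francisco"]]
clique, weight = max_weight_clique(cand_sets, sim)
```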


%Notice that the CBM is the formulation of a heuristic that approximates the true set of matches for $S$. In reality, the optimal solution for CBM may differ from the true set of matches $M^*$.  However, we have empirically verified the CBM assumption for the problem of instance matching. The results (F1 measures) presented in the paper show that by solving CBM using an efficient approximated method proposed in Sec. \ref{sec:cbmsolution}, we obtain a set $M$ that approximates a  given ground truth $M^*$ with sufficient accuracy.


%The goal of the paper is to find the matches for $S$ and compare it to state-of-the-art instance matching approaches. 

%Solving this problem requires  enumerating of all possible sets $\mathbf{M}$ and determining the optimal $M^*$. Since this enumeration could be very large, i.e., $|\mathbf{M}|=2^C$, we propose an approximate solution to this that does not require a full enumeration. Also, we show how to obtain a more compact representation of $M^*$. 

\textbf{CBM Variations.} Apart from the basic setting introduced above, two other variants exist: \textit{1-to-many class-based matching (1-to-many CBM)} and \textit{unrestricted class-based matching (UCBM)}. The former assumes $\forall s \in S: |M(s)| > 0$, while the latter assumes $\forall s \in S: |M(s)| \geq 0$. That is, 1-to-many CBM considers the cases where there is at least one match for each source instance, while UCBM considers the cases where some candidate set $C(s)$ may contain no match for $s \in S$. To capture the UCBM problem, the constraint should be removed and the term


\begin{equation}
Z = \frac{1}{|S|} \sum_{s\in S} \frac{|C(s) \cap M|}{|C(s)|}
\end{equation}

should be added to Eq.\ \ref{eq:opt}. $Z$ is simply an auxiliary term introduced to deal with the general case where $|M(s)|=|C(s) \cap M|$ might be zero. It assigns a higher score to a solution set $M \in \mathbf{M}$ when the majority of its matches $M(s)$ have cardinality greater than zero; hence, it avoids solution sets with many empty matches. 
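A minimal sketch of $Z$, reading the equation as the average over the source instances of the covered fraction of each candidate set; the candidate sets below are illustrative only.

```python
# Toy computation of the auxiliary UCBM term Z:
# the average over s in S of |C(s) ∩ M| / |C(s)|.

def z_term(cand_sets, M):
    return sum(len(C & M) / len(C) for C in cand_sets) / len(cand_sets)

cand_sets = [{"a", "b"}, {"c"}]   # C(s1), C(s2)
# A solution covering both sources, e.g. {"a", "c"}, scores higher than
# one leaving a source without any match, e.g. {"a"}.
```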
%This term is not needed in the setting where there always exists at least one candidate for a given source instance $s$. In this case, we can simplify Eq.\ \ref{eq:opt} removing $Z(S)$ and adding the constraint that $|M(s)|>0$.
%instance assume that the matches are one-to-one match or one-to-many, we can constrain $|M(s)|=1$, or $|M(s)|>0$, respectively. In those cases, 

%\begin{equation}
%\begin{aligned}
%& M^*(S) \approx \argmax_{M(S) \in \mathbf{M}} \frac{\sum_{M(s) \in M(S)} \sum_{t \in M(s)} Sim(t, M(S))}{\sum_{M(s) \in M(S)} |M(s)|}   %\\
%& \text{Subject to:} \\
%&Sim(t, M(S))  > \delta, \\
%& |M(s)|>0  \text{ and } \\
%& M(S)=\{M(s) |  s \in S : M(s) \subseteq C(s) \in C(S)\}
%\end{aligned}
%\label{eq:opt2}
%\end{equation}
  
In the next section, we propose an approach to solve CBM and its variants, 1-to-many CBM and UCBM.

\textit{\textbf{Issue: }}\textit{3. p3, col 1, line 15: "share the same value" Some direct matching solutions do not require this. E.g., [3] deals with edit distance and their candidates do not necessarily share a common attribute. Is it a must for the candidates of your approach? How do you generate candidates (cf. the comment below on experiment setup)?}

\textbf{Response. } 
\textcolor{question}{In order to find whether the instances match, existing approaches directly match information from the source against the target. This matching might be based on finding attributes and values (features) the instances have in common or computing the similarities between the instances using various functions and thresholds. However, the number of features they have in common is small and also the similarity is too low to find matches in this \emph{heterogeneous} scenario. }

\textcolor{question}{We added a paragraph to Section 3 to explain how candidates are computed in our approach. }
 

 

\textcolor{question}{Here are our changes to Section 3:}

To generate candidates in this work, we use simple boolean matching: we construct boolean queries using tokens extracted from the labels of the source instances. 
Standard pre-processing is applied to lowercase the tokens and to remove stop words.  
%, where the tokens of the source labels where  keywords in the queries. 
These queries retrieve candidates whose values share at least one token with the values of the corresponding source instance. This method is primarily geared towards quickly finding all matches, i.e.\ high recall, but may produce many incorrect candidates. Higher precision can be achieved using other techniques known in the literature~\cite{DBLP:journals/pvldb/ArasuCK09}. 
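The candidate-generation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (\verb+tokenize+, \verb+generate_candidates+), the stop-word list, and the sample data are ours.

```python
# Sketch of token-based boolean candidate generation: tokens of a source
# label become keywords of a boolean (OR) query; any target instance whose
# values share at least one token with the query is a candidate.
# All names and data below are illustrative, not from the paper.

STOP_WORDS = {"the", "of", "a", "an", "and"}

def tokenize(label):
    """Lowercase a label and drop stop words (the pre-processing step)."""
    return {t for t in label.lower().split() if t not in STOP_WORDS}

def generate_candidates(source_label, target_instances):
    """Return target instances sharing at least one token with the source label.

    target_instances: dict mapping an instance id to a list of its values.
    """
    query = tokenize(source_label)
    candidates = []
    for uri, values in target_instances.items():
        target_tokens = set()
        for v in values:
            target_tokens |= tokenize(v)
        if query & target_tokens:   # boolean OR semantics: any shared token
            candidates.append(uri)
    return candidates

targets = {
    "nyt:5555": ["Belmont (Calif)", "Belmont"],
    "nyt:9999": ["Paris (France)"],
}
print(generate_candidates("Belmont California", targets))  # → ['nyt:5555']
```

As the example shows, a single shared token suffices, which gives the high recall (and low precision) discussed above.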
 
\textit{\textbf{Issue: }}\textit{4. p5, col 2, line 2: The authors claimed that the common features are deemed to be more characteristic for the class in order to decide whether an instance belong to a class or not. The common features may be in D(X), O(X), and T(X), but only A(X) is composed of class-related features. So I can't quite agree with the claim.
}
\textit{\textbf{Issue: }}\textit{ I can't find the implicit class semantics in the equation  Eq. 7. Please discuss, maybe with an example showing what class we can infer with the equation.}

\textbf{Response. } \textcolor{question}{These two issues are addressed below. In Section 2  we added this:}
 
\begin{definition}[Features] Let $G$ be a dataset and $X$ be a set of instances in $G$. The features of $X$ are: 

\begin{itemize}
%\setlength{\itemsep}{-4pt}
	\item $A(X) = \{p | (s, p, o) \in IR(G, X) \land s \in X\}$,
    \item $D(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in L \}$,
	\item $O(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in U \}$, 
	\item $T(X) = \{(p,o) | (s, p, o) \in IR(G, X) \land s \in X \}$,
	\item  $F(X) = A(X) \cup D(X)\cup O(X)\cup T(X)$.
%	\vspace{-4pt}
\end{itemize}
\end{definition}  

Note $A(X)$ is the set of predicates, $D(X)$ the set of literals, $O(X)$ the set of URIs, and $T(X)$ is the set of predicate-object pairs that appear in the representation of $X$. 

Considering $X=$\{\verb+db:Belmont_California+\}, its features are: $A(X)=$\{\verb+rdfs:label+, \verb+db:country+\}, $D(X)=$ \{ 'Belmont'\}, $O(X)=$\{\verb+db:Usa+\}, and $T(X)=$\{(\verb+rdfs:+ \verb+label+, 'Belmont'), (\verb+db:country+, \verb+db:Usa+)\}. 
Hence, $F(X)=$\{ \verb+rdfs:label+, \verb+db:country+, 'Belmont', \verb+db:Usa+, (\verb+rdfs:+ \verb+label+, 'Belmont'), (\verb+db:country+, \verb+db:Usa+)\}.

Note that $A(X)$ captures the predicates, which are the schema-level features that instances of a class typically have in common. However, we do not use $A(X)$ alone but the whole union set $F(X)$, which comprises both schema- and data-level features. This is due to our special notion of class and the way we compute it: instances belong to a class when they share some features, regardless of whether these are schema- or data-level features. In this way, both types of features are leveraged for inferring the class that instances belong to.  
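The feature sets of the definition above can be computed in one pass over the triples of $IR(G, X)$. The sketch below is ours; in particular, the leading-quote convention for distinguishing literals from URIs is an illustrative assumption, not the paper's encoding.

```python
# Sketch of Definition (Features): given triples (s, p, o) from the
# representation of X, collect A(X), D(X), O(X), T(X) and the union F(X).
# Convention (ours, for brevity): objects prefixed with "'" are literals,
# any other string is treated as a URI.

def features(triples, X):
    """Compute (A, D, O, T, F) for the instances in X."""
    A, D, O, T = set(), set(), set(), set()
    for s, p, o in triples:
        if s not in X:
            continue
        A.add(p)                      # predicate            -> A(X)
        if o.startswith("'"):
            D.add(o)                  # literal object       -> D(X)
        else:
            O.add(o)                  # URI object           -> O(X)
        T.add((p, o))                 # predicate-object pair -> T(X)
    return A, D, O, T, A | D | O | T

triples = [
    ("db:Belmont_California", "rdfs:label", "'Belmont"),
    ("db:Belmont_California", "db:country", "db:Usa"),
]
A, D, O, T, F = features(triples, {"db:Belmont_California"})
```

Running this on the \verb+db:Belmont_California+ example reproduces the six-element $F(X)$ given in the text: two predicates, one literal, one URI, and two pairs.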

\textbf{Class}. We define a class as follows:
 

\begin{definition}[Class] Let $G$ be a dataset and $X$ a set of instances in $G$, $X$ is a class if  $\forall x \in X: F(\{x\}) \cap F(X - \{x\}) \neq \emptyset$.
\end{definition}

Intuitively, a class is a set of instances in which every instance shares at least one feature with the rest of the set. \par



\textcolor{question}{Then in Section 3, we simplified the text and discussed how the introduced features can be used to compute the similarity between an instance (a set of instances) and a class (which is also represented as a set of instances): }
 

\textbf{Similarity Function.} Now, we introduce $SetSim(X_1,X_2)$ to compute the similarity between two sets of instances $X_1$ and $X_2$ based on their sets of features $F(X_1)$ and $F(X_2)$:

\begin{equation}
SetSim(X_1,X_2)=FSSim(F(X_1),F(X_2))
\end{equation} 

% (no matter the number of features they are associated with). Our hypothesis is that the commonalities is one order of magnitude more relevant than the differences on defining similarity in our problem setting. In Sec \ref{sec:evaluation}, we verify empirically that $FSSim$ beats the common Jaccard set similarity by a consistent margin.
%Based on Tversky's contrast model \cite{tversky_features_1977}, 

where $FSSim(F(X_1),F(X_2))$ is a function capturing the similarity between $F(X_1)$ and $F(X_2)$. 

Early work such as Tversky's \cite{tversky1977features} shows that the similarity of a pair of items depends both on their commonalities and their differences. This intuition is exploited by similarity functions used for instance matching, which, like the Jaccard similarity, give the \emph{same weight} to commonalities and differences. 

%This is suitable for matching instances because commonalities help to infer that two instances might be the same while differences support the conclusion that they are not. 

We depart from the equal-weight strategy to give a \emph{greater emphasis to commonalities}. This is because the goal of class-based matching is to find whether some instances match a class, which by our definition is the case when they share many features with that class. %We do so because the amount of features that a class of instances have in common  is typically small compared the amount of features that are specific to individual instances. 
For deciding whether an instance belongs to a class, the common features are thus, by definition, more crucial. Moreover, the special treatment of common features also makes sense considering that they are scarcer: the number of features shared by all instances in a class is typically much smaller than the number of features that are not shared. 

%Features that are specific to individual instances are less representative for the class, and also convey more noise, due to their abundance. 
We propose the following function to support this intuition:

\begin{equation}
\footnotesize
FSSim(f_1,f_2) = \left\{ 
  \begin{array}{ll}
     0 & \text{if } |f_1\cap f_2|=0 \\
     |f_1\cap f_2| - \frac{|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|} & \text{otherwise}
  \end{array} \right. 
\label{eq:setsimsr}
\end{equation}
where $f_1$ and $f_2$ stand for $F(X_1)$ and $F(X_2)$, respectively. $FSSim(f_1,f_2)$ only considers $f_1$ and $f_2$ to be similar when there exist some commonalities (i.e.\ $FSSim(f_1,f_2)=0$ if $|f_1\cap f_2|=0$). The first term $|f_1\cap f_2|$ has a much larger influence: it captures commonalities as the number of overlaps between $f_1$ and $f_2$, which is always at least 1. The second term $\frac{|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|}$, capturing the differences, is always smaller than 1. In fact, given $f_j$ and $f_k$ that have $n$ and $n-1$ features in common with $f_i$, respectively, $FSSim$ always returns a higher score for $f_j$. 

For example, assume $f_1=F($\{\verb+db:Belmont_California+\}$)$, $f_2=F($\{\verb+db:Belmont_France+\}$)$ and $f_3=F(C($\verb+nyt:5555+$))$; then $FSSim(f_1, f_3) = 3.65$, while $FSSim(f_2, f_3) = 1.5$. The scores reflect the fact that $f_1$ has 4 features in common with $f_3$, while $f_2$ has only 2.
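Eq.~\ref{eq:setsimsr} is a direct set computation. The sketch below transcribes it; the feature sets used to exercise it are made up for illustration (they are not the actual features of the Belmont instances).

```python
# Transcription of FSSim: |f1 ∩ f2| minus a difference penalty that is
# always below 1, so the overlap count dominates. The sample sets are
# invented to illustrate the bias towards commonalities.

def fssim(f1, f2):
    """Commonality-biased set similarity (Eq. FSSim)."""
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    penalty = (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))
    return common - penalty

f_i = {"a", "b", "c", "d"}
f_j = {"a", "b", "c", "x"}   # 3 features in common with f_i
f_k = {"a", "b", "y", "z"}   # 2 features in common with f_i

print(fssim(f_i, f_j))       # → 2.8   (3 - 2/10)
assert fssim(f_i, f_j) > fssim(f_i, f_k)   # more overlap always wins
```

Note how the integer overlap count separates the two scores by more than one unit, while the penalty only perturbs them within a unit.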

Notice that $FSSim$ does not capture any class semantics; it is a set similarity function tailored towards commonalities, supporting the intuition discussed before. However, the class semantics is inferred as a result of applying this similarity computation in our approach: the instances found by CBM form a class that corresponds to the class of interest. 
% i.e. $FSSim(f(t_i), f(M^*(S))) > FSSim(f(t_j), f(M^*(S)))$. 


The bias towards commonalities is captured by the following theorem, which does not hold for the Jaccard function (see Appendix A):  

\begin{theorem}
If $|f_i\cap f_j| > |f_i \cap f_k|$ then $FSSim(f_i,f_j) > FSSim(f_i,f_k)$.
\label{theorem:t1}
\end{theorem}
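The contrast with Jaccard can be checked numerically. The counterexample below is ours (the sets are invented, and $FSSim$ is restated so the snippet is self-contained): $f_j$ shares more features with $f_i$ than $f_k$ does, yet Jaccard ranks $f_k$ higher because of $f_j$'s larger union, while $FSSim$ respects Theorem~\ref{theorem:t1}.

```python
# Invented counterexample: Jaccard violates the monotonicity of
# Theorem 1 (more common features can yield a *lower* Jaccard score),
# while FSSim preserves it.

def jaccard(f1, f2):
    return len(f1 & f2) / len(f1 | f2)

def fssim(f1, f2):
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    return common - (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))

f_i = {"a", "b", "c", "d"}
f_j = {"a", "b", "c", "e", "f", "g", "h", "i"}  # 3 common, many extras
f_k = {"a", "b"}                                # only 2 common

assert fssim(f_i, f_j) > fssim(f_i, f_k)        # theorem holds for FSSim
assert jaccard(f_i, f_j) < jaccard(f_i, f_k)    # ...but fails for Jaccard
```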

 

%Note the proposed function does not completely neglect the role of differences. In particular, when two instances have the same number of overlaps with a class, their differences to that class decide which ones is a better match. 

%To avoid  to bias the solution towards a specific feature set, 
%We consider all features ($A(\cdot)$, $D(\cdot)$, $O(\cdot)$ and $T(\cdot)$) to be equally important in performing class matching. We observe empirically that on average, this strategy produces better results than the settings where we remove any of these feature sets. 


 


\textit{\textbf{Issue: }}\textit{5. Sec 3.1, Eq. 7: In the "otherwise" case, the two terms are in different units. The first term $|f1 \cap f2|$ is a count, while the second term $\frac{|f1-f2| + |f2-f1|}{2|f1 \cup f2|}$ is a ratio. I doubt it is mathematically reasonable to subtract one from another, though the scoring function was designed to reflect the bias on commonality.}

\textbf{Response. } \textcolor{question}{ Using these two terms has the effect that the count representing commonalities has a much higher weight than the second term, which captures differences. In particular, only when two instances $x$ and $y$ have the same number of overlaps with a class $k$, i.e.\ $|x \cap k| = |y \cap k|$, do the differences captured by the second term have an actual impact (they help to distinguish which instance is more similar to $k$). Otherwise, its impact is very small relative to the first term. This helps to achieve the bias captured in the theorem above.}

 
\textit{\textbf{Issue: }}\textit{6. Sec 3.1, Eq. 8: What's the meaning of the similarity between a candidate (t) and a set of candidates (C(s'))? I can't quite understand why t becomes a correct match for s, if t has the highest similarity to "other candidates of other instances" rather than s itself. Please discuss the rationale. }

\textbf{Response. } \textcolor{question}{ We hope that by clarifying the intuition behind CBM as previously discussed, the matching between a candidate and a class is more clear. Further, we modified this part in Sec. 5. to clarify this point: }

\textbf{Class-based Matching.} 
Given a set of instances $S$ and the candidate sets $C(S)=\{C(s_1),\dots, C(s_n)\}$, we implement class-based matching by finding the instances $t$ from each candidate set (i.e. $t \in C(s) \in C(S)$) that are similar to the candidate sets $C(S)$. 

Our method starts by computing a similarity score between $t \in C(s)$ and $C(S)$ itself, i.e.\ $Sim(\{t\}, C(S))$. In this process, $C(S)$ is considered the class of interest, not the solution set $M$; this differs from the formal problem definition, where $M$ is both the class of interest and a solution set. In this approach, we start from $C(S)$ to obtain the solution sets $M$ and $M(S)$. 
%In Sec. \ref{sec:evaluation}, we empirically studied the accuracy of this method using a benchmark ground truth on instance matching. 
%, therefore avoiding to enumerate all possible $M \in \mathbf{M}$. 

This solution exploits the intuition that, given $t$ and any candidate set $C(s) \in C(S)$, if $F(\{t\})$ does not share any feature with $F(C(s))$, then $t$ is not similar to any instance in this candidate set. If $t$ is not similar to any candidate set $C(s) \in C(S)$, it cannot form a class with any candidate instance; therefore, based on the class-based matching assumption, it cannot be a correct match for $s$. Conversely, a candidate $t$ that is more similar to the other candidate sets is more likely to form a class with other candidates and, therefore, can be a correct match. This heuristic is implemented as follows.


The computation of $Sim(\{t\}, C(S))$ yields a score for each individual instance $t \in C$. Then, the final solution set $M$ is composed of $t \in M(s) \subseteq C(s)$, where for all $t \in M(s)$, $Sim(\{t\}, C(S)) > \delta$. Below, we define $Sim$ and then describe how we compute the threshold $\delta$.

 
\begin{equation}
Sim(t,C(S))=\sum_{C(s') \in C(S)^-}\frac{SetSim(\{t\},C(s'))}{|C(s')|}
\label{eq:urds}
\end{equation}

where $t \in C(s)$ and $C(S)^- = C(S) \setminus C(s)$. 

First, note that in Eq.\ \ref{eq:urds}, $t \in C(s)$ is not compared with $C(s)$ but with the other candidate sets $C(s')$. In our implementation, $C(s)$ is computed via direct matching and thus contains candidates very similar to $t$. Just like the other candidate sets $C(s')$, these candidates also help to capture the class of interest. However, due to their relatively high similarity to $t$, they would have too strong an impact compared to $C(s')$. Excluding $C(s)$ from the class similarity computation avoids this strong bias. Secondly, 
%This to avoid assigning those candidates $t$ only avoids    candidates that are dissimilar to other candidate sets to obtain larger scores when their features are abundant in $C(s)$. 
note that the individual score $SetSim(\{t\},$ $C(s'))$ is weighted by the cardinality of $C(s')$, such that a $C(s')$ with high cardinality has a smaller impact on the aggregated similarity measure. We do this to leverage the observation that small sets contain 
% larger pseudo-homonyms sets contain more noise and 
few but more representative instances; they are better representations of the class of interest. 
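Eq.~\ref{eq:urds} can be sketched as follows. The helper names and the tiny data set are ours; $F(C(s'))$ is taken, as in the feature definition, to be the union of the features of the instances in $C(s')$.

```python
# Sketch of Eq. (Sim): the score of candidate t from C(s) aggregates its
# FSSim similarity to every *other* candidate set C(s'), each contribution
# divided by |C(s')| so that large candidate sets weigh less.
# Names (sim, feat, candidate_sets) are illustrative, not from the paper.

def fssim(f1, f2):
    common = len(f1 & f2)
    if common == 0:
        return 0.0
    return common - (len(f1 - f2) + len(f2 - f1)) / (2 * len(f1 | f2))

def sim(t, own_set_key, candidate_sets, feat):
    """candidate_sets: dict s -> list of candidate ids; feat: id -> feature set."""
    score = 0.0
    for s, cands in candidate_sets.items():
        if s == own_set_key:          # exclude C(s) itself, per Eq. (Sim)
            continue
        set_features = set()
        for c in cands:               # F(C(s')): union of member features
            set_features |= feat[c]
        score += fssim(feat[t], set_features) / len(cands)
    return score

feat = {"t1": {"a", "b"}, "u1": {"a", "c"}, "u2": {"x"}}
candidate_sets = {"s1": ["t1"], "s2": ["u1", "u2"]}
print(sim("t1", "s1", candidate_sets, feat))  # → 0.3125
```

Here \verb+t1+ is only compared against $C(s_2)$, and the contribution is halved because $|C(s_2)|=2$.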
%consequently we want that resources more similar to singleton pseudo-homonyms set to have a relative higher final score of similarity.


\textit{\textbf{Issue: }}\textit{Issue: Sec 3.3, Algorithm 3, Line 15: Shouldn’t the return of $\delta.max$ be put at the end of the algorithm?}

\textbf{Response. }   \textcolor{question}{ We have checked; this is the way we have implemented our solution and we believe it is correct. Please notice that this algorithm contains a recursive call before the return statement, so the method can output the results before reaching the end of the algorithm.}

\textit{\textbf{Issue: }}\textit{8. Sec 4: I have two questions regarding the experiment setup: (1) What candidate (C(S), not C(S)*) selection algorithm was used in your experiment? (2) What direct matching algorithm was used in SERIMI?
}

\textbf{Response. } \textcolor{question}{To clarify these points:}

\textcolor{question}{(1) Please refer to answer given to issue 3. } 

\textcolor{question}{(2) We used a standard direct matching approach in SERIMI that computes whether $s$ matches $t$ via $Jaccard > \delta$, using $F(X)$ as features. The threshold $\delta$ was determined by the method discussed in the paper.}
 
 
\textit{\textbf{Issue: }} \textit{9. Sec 4.2: The reason why the class-based matching performed poorly on Person11-Person12 (F1 = 0.49/0.47) needs some discussion.}


\textbf{Response. } \textcolor{question}{The text below was included in Sec. 6.2:}

In particular, S and S+SR performed poorly on Person11-Person12 (49\% and 47\%, respectively) because the features of the candidate instances are very similar (e.g.\ they all contain phone and address attributes and are of the type Person). Due to this, CBM produced similar scores for all candidates, which were not sufficiently distinct to separate the correct matches from the incorrect ones. For this task, DM performed better because the overlap between the source and target instances is sufficiently high to identify the correct matches.

 
\section{Responses to Reviewer 2}
 
\textit{\textbf{Issue:} }\textit{W1.  The authors modeled   class-based matching as an optimization problem (Definition 3), but it was unclear for me that why the authors defined this problem in this way.
}

\textbf{Response. }   \textcolor{question}{We have largely modified the description of the problem and put it under a separate Section 4 to make it clearer. Please refer to our answer given to issue 2 of reviewer 1 for detail and the changes we made in Section 4. }

\textit{\textbf{Issue:} }\textit{W2. Since the optimal solution cannot be achieved for class-based matching, the authors proposed several heuristic approaches to derive an approximate solution, such as replacing M(S) with C(S), and using a greedy algorithm to derive the local optimal solution. But the authors did not investigate that how such approximations will affect the result quality. Please add some theoretical and experimental analysis of this part.
}

\textbf{Response. } \textcolor{question}{The way we presented our problem and solution was confusing. Thus, in the new Section 4, we made clear that existing works on instance matching are typically more practical: they do not aim to provide a theoretically sound and complete solution. This is because instance matching involves many different dimensions and uncertainties, such as differences in instance representation at both the data and schema level. Attempting to formalize and compute a perfect solution is neither practical nor the goal of our work. }
\textcolor{question}{Thus, just like existing approaches, we approach this problem as follows: we propose intuitively sensible strategies to compute matches, discuss the intuitions, and verify the solution using real-world datasets and matching tasks. }

\textcolor{question}{Analyzing the approximation inherent in the proposed solution would require a much more theoretical approach to instance matching, which was not our goal. Thus, we changed the presentation of our solution to describe it not as an approximation but simply as another strategy that aims to deliver high-quality results for instance matching (as verified in the experiments). }

\textcolor{question}{We however provide a formal definition of the problem to discuss its theoretical complexity (please refer to our answer given to issue 2 of reviewer 1 for the changes we made in Section 4). }

%%
 %Notice that the CBM is the formulation of a heuristic that approximates the true set of matches for $S$. In reality, the optimal solution for CBM may differ from the true set of matches $M^*$.  However, we have empirically verified the CBM assumption for the problem of instance matching. The results (F1 measures) presented in the paper show that by solving CBM using an efficient approximated method proposed in Sec. 5, we obtain a set $M$ that approximates a  given ground truth $M^*$ with sufficient accuracy. Therefore, the resulting quality of the algorithm was experimentally evaluated and measured as F1.
%
%We have reformulated the problem and outlined its theoretical background.   Sections 4.2 and 4.3 contain  the new formulation that is copied below.


 
\textit{\textbf{Issue: }}\textit{W4. The authors conducted extensive experiments to evaluate the running time and the accuracy of their approaches. In terms of the accuracy, the authors mainly reported the F1 values. Why not show the precision/recall results? I believe that will help readers to fully understand that why the class-based matching can improve the result accuracy (is it due to the improvement of precision, or recall, or both?).}
 
\textbf{Response. }  \textcolor{question}{The reason we did not show precision/recall results is simply the space limitation (we can include them in our technical report). However, we discuss in the evaluation part the reasons for the changes in the F1 measure. We make clearer that, in the way our solution is set up, i.e.\ SERIMI combined with class-based matching (CBM) and/or direct matching (DM), the analyzed improvements in F-measure are mainly due to the effect on precision: SERIMI implements a candidate selection step that has high recall but low precision (compared to the precision after applying CBM and/or DM). Especially on the hard tasks reported in the paper, we show in the analysis that using CBM and DM, SERIMI improved precision because CBM and DM help to eliminate candidates that are non-matches. Therefore, compared to state-of-the-art instance matching systems, SERIMI's competitive F1 is mainly due to the high recall of its candidate selection and the high precision achieved by combining CBM with DM.  }
  
\section{Responses to Reviewer 3}
  
\textit{\textbf{Issue: }}1, \textit{The aim of this paper is to calculate M*(S). As the direct enumerate M*(S) is expensive. The author use a heuristic methods to approximately calculate M(S). The whole method dose not have any analysis on why this heuristic works and dose not included any theoretical analysis on how good this approximation is.  Please give more deep analysis on this part.}
\color{black}

 \textbf{Response. }  \textcolor{question}{  Please refer to our answer given to reviewer 2, issue W2.}
 
\textit{\textbf{Issue: }}\textit{2, The feature set introduced in 3.1 is using all parts of the sets of construct a feature set. The feature set includes predicates, instance, and literals. Those features are treated equal. Should the literal are more important than predicate?}

\textbf{Response. }  \textcolor{question}{We aim to compute whether an instance matches a class using these features. By our definition, it does when it overlaps with the instance-based representation of the class on some features (no matter the type of features). Given this notion of class representation and matching, it is not easy to say a priori which type of features is more important. In the experiments, we show that every such type of features is important, i.e.\ excluding any type consistently worsens the results. Also, we analyze different combinations, i.e.\ S+SR-P, S+SR-O, S+SR-D and S+SR-T. As future work, we may consider a supervised strategy for learning the weights for different types of features. }

%Importantly, to avoid  biasing the solution towards a specific feature set, we considered  all features ($A(\cdot)$, $D(\cdot)$, $O(\cdot)$ and $T(\cdot)$) to be  equally determinant in our setting. As we observed empirically, on average, this strategy produced  better results than the settings where we removed any  feature set. 
\textcolor{question}{For convenience, we included our revised text that explains the use of these features: }
 
 \textbf{Similarity Function.} Now, we introduce $SetSim(X_1,X_2)$ to compute the similarity between two sets of instances $X_1$ and $X_2$ based on their sets of features $F(X_1)$ and $F(X_2)$:

\begin{equation}
SetSim(X_1,X_2)=FSSim(F(X_1),F(X_2))
\end{equation} 

% (no matter the number of features they are associated with). Our hypothesis is that the commonalities is one order of magnitude more relevant than the differences on defining similarity in our problem setting. In Sec \ref{sec:evaluation}, we verify empirically that $FSSim$ beats the common Jaccard set similarity by a consistent margin.
%Based on Tversky's contrast model \cite{tversky_features_1977}, 

where $FSSim(F(X_1),F(X_2))$ is a function capturing the similarity between $F(X_1)$ and $F(X_2)$. 

Early work such as Tversky's \cite{tversky1977features} shows that the similarity of a pair of items depends both on their commonalities and their differences. This intuition is exploited by similarity functions used for instance matching, which, like the Jaccard similarity, give the \emph{same weight} to commonalities and differences. 

%This is suitable for matching instances because commonalities help to infer that two instances might be the same while differences support the conclusion that they are not. 

We depart from the equal-weight strategy to give a \emph{greater emphasis to commonalities}. This is because the goal of class-based matching is to find whether some instances match a class, which by our definition is the case when they share many features with that class. %We do so because the amount of features that a class of instances have in common  is typically small compared the amount of features that are specific to individual instances. 
For deciding whether an instance belongs to a class, the common features are thus, by definition, more crucial. Moreover, the special treatment of common features also makes sense considering that they are scarcer: the number of features shared by all instances in a class is typically much smaller than the number of features that are not shared. 

%Features that are specific to individual instances are less representative for the class, and also convey more noise, due to their abundance. 
We propose the following function to support this intuition:

\begin{equation}
\footnotesize
FSSim(f_1,f_2) = \left\{ 
  \begin{array}{ll}
     0 & \text{if } |f_1\cap f_2|=0 \\
     |f_1\cap f_2| - \frac{|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|} & \text{otherwise}
  \end{array} \right. 
\label{eq:setsimsr}
\end{equation}
where $f_1$ and $f_2$ stand for $F(X_1)$ and $F(X_2)$, respectively. $FSSim(f_1,f_2)$ only considers $f_1$ and $f_2$ to be similar when there exist some commonalities (i.e.\ $FSSim(f_1,f_2)=0$ if $|f_1\cap f_2|=0$). The first term $|f_1\cap f_2|$ has a much larger influence: it captures commonalities as the number of overlaps between $f_1$ and $f_2$, which is always at least 1. The second term $\frac{|f_1 - f_2| + |f_2 - f_1|}{2 |f_1 \cup f_2|}$, capturing the differences, is always smaller than 1. In fact, given $f_j$ and $f_k$ that have $n$ and $n-1$ features in common with $f_i$, respectively, $FSSim$ always returns a higher score for $f_j$. 

For example, assume $f_1=F($\{\verb+db:Belmont_California+\}$)$, $f_2=F($\{\verb+db:Belmont_France+\}$)$ and $f_3=F(C($\verb+nyt:5555+$))$; then $FSSim(f_1, f_3) = 3.65$, while $FSSim(f_2, f_3) = 1.5$. The scores reflect the fact that $f_1$ has 4 features in common with $f_3$, while $f_2$ has only 2.

%Notice that $FSSim$ does not capture any class semantics but is simply a set similarity function that is used to compute the membership of an instance to a class. 
Notice that $FSSim$ does not capture any class semantics; it is a set similarity function tailored towards commonalities, supporting the intuition discussed before. However, the class semantics is inferred as a result of applying this similarity computation in our approach: the instances found by CBM form a class that corresponds to the class of interest. 
% i.e. $FSSim(f(t_i), f(M^*(S))) > FSSim(f(t_j), f(M^*(S)))$. 


The bias towards commonalities is captured by the following theorem, which does not hold for the Jaccard function (see Appendix A):  

\begin{theorem}
If $|f_i\cap f_j| > |f_i \cap f_k|$ then $FSSim(f_i,f_j) > FSSim(f_i,f_k)$.
\label{theorem:t1}
\end{theorem}


\textit{\textbf{Issue: }}\textit{3, The similarity function FSSim is biased. The difference of two sets only amount for 1 unit of score, why this biased similarity function works? Is this similarity function has any knowledge to support.}

\textbf{Response. } \textcolor{question}{ Please refer to our answer given to issues 4 and 5 of reviewer 1.}

\textit{\textbf{Issue:} }\textit{4, The score of one instance is calculated by intersect the feature set with the complementary candidate sets’ features. What if the score is high with the complementary set but it is totally dissimilar with the source instance it comes from.}

\textbf{Response. }  \textcolor{question}{For our class-based matching approach, we assume a direct matching solution (a black-box matcher) that provides the candidates. For these candidates to be the output, they have to be similar to the source instance (direct matching typically keeps as solutions those candidates whose similarity is higher than a threshold). }
%Notice, we do not compare $t$ with $s$ directly  because we are assuming they do not share any features in common. This is the assumption of class-based matching. 
 \textcolor{question}{In the experiments, we use SERIMI to compute these candidates and show that class-based matching helps to filter those computed candidates that are not correct. With this approach, SERIMI or any other matching solution for computing candidates can be used.} %As future work, we may also consider the candidates' similarity scores that might be available in the output of the underlying matching solution. These scores can be (weighted) and combined with the (weighted) class-based matching scores. }
%We study this case in the evaluations (S+SR+DM).

\textit{\textbf{Issue:} }\textit{The matching could result in false positive. Is there any assumption on the source instances that ensure the false positive could be minimized? Another words, what kind of source instance sets this method is preferred?}

\textbf{Response. }\textcolor{question}{ Regarding the source instances, class-based matching should be applied to match a class of source instances. Theoretically, the method should perform better when the source instances are specific to a single notion of class (e.g.\ people) than when they come from multiple classes (e.g.\ people and locations). The reason is that candidate instances for mixed source instances will potentially carry more noise, which impacts the accuracy of the method. Mixed source instances should first be split into sets of specific classes before applying the method, which can be done automatically using TYPifier, our recent work~\cite{typifier}. }
%Overall, class-based matching is recommended for cases where the source instances belong to a well-defined class (e.g.\ people, politicians, countries, locations, cities, etc).   

\textit{\textbf{Issue:} } \textit{The author should give more introductions on the measurements. For instance, not every ready know what F1 is without reading the related work.
} 

\textbf{Response. } \textcolor{question}{Text included in Sec. 6:}

\textbf{Evaluation Metrics.}
We used the standard F1 measure for result accuracy (also employed by OAEI). $F1 = 2 \times \frac{Recall \times Precision}{Recall + Precision}$ is the harmonic mean of precision (the proportion of correct matches among the matches found) and recall (the proportion of matches found among all actual matches). To compute F1, the provided reference mappings were used as the ground truth. 
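The metric can be sketched over sets of (source, target) pairs; the function name and the sample pairs below are illustrative, not from the evaluation.

```python
# Sketch of the F1 computation: matches found vs. a ground-truth reference
# mapping, both represented as sets of (source, target) pairs.

def f1_score(found, truth):
    """Harmonic mean of precision and recall over match pairs."""
    if not found or not truth:
        return 0.0
    correct = len(found & truth)
    if correct == 0:
        return 0.0
    precision = correct / len(found)   # correct among matches found
    recall = correct / len(truth)      # correct among all actual matches
    return 2 * precision * recall / (precision + recall)

found = {("s1", "t1"), ("s2", "t9")}
truth = {("s1", "t1"), ("s2", "t2"), ("s3", "t3")}
print(f1_score(found, truth))  # → 0.4  (precision 1/2, recall 1/3)
```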

  
\bibliographystyle{IEEEtran}
\bibliography{journal}

 
\end{document}



