% The Computer Society usually requires 10pt for submissions.
%
\documentclass[10pt,journal,compsoc]{IEEEtran}
%
%\hyphenation{op-tical net-works semi-conduc-tor}

\usepackage{amsmath}
\usepackage{color}
\usepackage[]{graphicx}
\usepackage{balance} 
\usepackage{setspace} 
\usepackage{spverbatim} 
\usepackage{algorithm}
\usepackage{algorithmic} 
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{myproof}{Proof}
\newtheorem{lemma}{Lemma}
\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\definecolor{question}{RGB}{25,25,112}
%\definecolor{highlight}
\newcommand{\todo}[1]{\textcolor{red}{@TODO: #1}}
\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{SERIMI: Class-based Matching for Instance Matching Across Heterogeneous Datasets}

\author{Samur Araujo, Duc Thanh Tran, Arjen P. de Vries and Daniel Schwabe 
                 
% note need leading \protect in front of \\ to get a newline within \thanks as
% \\ is fragile and will error, could use \hfil\break instead.
 
 }

 
\IEEEcompsoctitleabstractindextext{%
 

}


% make the title area
\maketitle

\section{Responses -- 2nd Revision}
First of all, we thank all reviewers for their valuable comments. In our assessment, the points raised do not require major changes to the text.



 
\section{Responses to Reviewer 1}
 
\textit{\textbf{Issue 1: }} \textit{1. The authors define the class as a set of instances such that each instance shares at least one feature in common with any other instance in this set. However, the source dataset is assumed to be a specific class later in this paper (Section 4), hence rendering the definition quite confusing. If you require each instance shares at least one feature in common with any other in the source dataset, the proposed technique would be very limited for practical use, because most real datasets do not satisfy this condition, even if they have been partitioned via Typifier [10]. In addition, in the source dataset in Table 1, I don't see anything in common between nyt:2223 and nyt:5962, but they are assumed to be in the same class (see the example in the 3rd paragraph of page 2). Does it contradict your definition of class? Please clarify.}

\textbf{Response. } \textcolor{question}{ We understand that the concept of class as used in SERIMI might be less intuitive for readers. The first point to notice is that the formal notion of class, as defined in the paper, applies only to the target dataset. This definition is purposely made ``loose'', because we want to be able to handle heterogeneous, incorrect, or incomplete datasets against which we may wish to match a known dataset.
As far as the source dataset is concerned, all we require is that the user identifies members of a ``class'' according to whatever criteria they have in mind, including criteria that may not be present in the data. In other words, all SERIMI requires as input are the sets of source elements that should be considered as belonging to the same class. In the example in Table I, the source dataset could be understood as ``Locations'' or ``Cities''. Notice that there is no requirement or assumption that this class be represented directly, or even indirectly, in the source dataset. Hence, the formal reasoning about class membership is in fact immaterial for the source dataset.
 }

 

\textit{\textbf{Issue 2: }} \textit{2. For FSSim, the authors claimed in the response letter that the first term has more weight. I disagree. The first term dominates the equation. The second term, difference, only makes sense when the overlaps are equal. I am also wondering why you have to put the difference in this equation. Since commonalities dominate, why not simply use overlap similarity; i.e., $FSSim(f1, f2) = |f1 \cap f2|$? Note subtracting a ratio from a count is mathematically not sound.}

\textbf{Response. }  \textcolor{question}{ As observed by the reviewer, differences matter when overlaps are equal, but they also matter when the candidates have a large number of differing features.
As stated in the paper, we do want commonalities to dominate. Nevertheless, when comparing elements whose commonalities provide no discriminatory power, the differences between them provide an additional discrimination criterion. However, the weight of the differences should never dominate that of the commonalities.
Under these assumptions, we simply adopted a well-established formula [1], widely used in the literature, which gives the differences an appropriate weight. 
 }
 
 [1] A. Tversky, “Features of similarity,” Psychological Review, vol. 84, no. 4, pp. 327–352, July 1977.
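For the reader's convenience, the contrast model of [1] has the general form

\begin{equation*}
S(a,b) \;=\; \theta\, f(A \cap B) \;-\; \alpha\, f(A \setminus B) \;-\; \beta\, f(B \setminus A),
\end{equation*}

where $A$ and $B$ are the feature sets of $a$ and $b$, $f$ is a salience measure over feature sets, and $\theta, \alpha, \beta \ge 0$ are weights. Choosing $\theta$ large relative to $\alpha$ and $\beta$ makes the commonality term dominate while still penalizing differences; the specific instantiation used for FSSim is the one given in the paper.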
 
\section{Responses to Reviewer 3}
 
\textit{\textbf{Issue 1: }} \textit{ This paper proposes a class-based instance matching algorithm. In Section 4, the authors propose the CBM problem and prove it to be NP-hard, and then in Section 5 the authors propose a solution. It is not clear what the goal of Section 5 is: to improve efficiency or quality? It seems the authors want to improve efficiency. However, in the experimental study, the paper focuses on quality. So the authors should comment on this.}

\textbf{Response. }  \textcolor{question}{ This paper proposes an approach for instance matching. As with other published approaches on this subject, the goal is to produce as accurate a match as possible. Therefore, the quality of the approach should be measured by the quality, i.e., the accuracy, of the resulting computed match.
The goal of Section 5 is precisely to establish this quality by comparing CBM to state-of-the-art alternative approaches; CBM performs favorably in the majority of cases. In addition, a secondary quality criterion is generality: CBM, contrary to other proposals, does not assume anything about the datasets and is therefore more general.
We also discuss the conditions under which CBM does not perform so well, and show when a combination of approaches can produce better (more precise) results.
It is true that CBM requires more comparisons than DM and could face performance problems in practice. Nevertheless, we have implemented it in a way that allows it to perform satisfactorily, in the sense that we were able to run all the evaluation tests. To give a more complete characterization of SERIMI, we have also included indicative execution times, allowing comparison with the simpler DM implementation in the same execution environment.
In the same spirit, we have also tried to explain briefly how the implementation achieves acceptable performance. This is the goal of Section 5.2.
We would like to stress that efficiency and optimality are not claims we make about CBM with respect to the instance-matching problem, nor have they been claimed by any of the comparable published approaches. Therefore, efficiency and optimality are separate research topics not addressed in the research reported here. In fact, pursuing these goals would perforce require comparable precision as a premise. 
We recognize that the inclusion of Section 5.2 may have misled the reviewer with respect to our goals, and we would consider removing it entirely from the paper, since it is not directly related to the goals (precision and generality) of the proposed approach.
 }
 
\textit{\textbf{Issue 2: }} \textit{ 
  In addition, since CBM is an NP-hard problem and there are many heuristic algorithms, the authors should compare with the existing heuristic algorithms. The following paper addresses a similar problem and the authors should compare with it.\\
Jiannan Wang, Guoliang Li, Jianhua Feng: Fast-Join: An Efficient Method for Fuzzy Token Matching based String Similarity Join. ICDE 2011: 458--469}

\textbf{Response. }  \textcolor{question}{ As remarked above, we are not concerned with efficiency per se, and we are not attempting to propose an efficient or optimal approach to the problem; we are focused on quality. Section 5.2 merely describes an implementation technique that allows running the evaluation tests in an acceptable time. It turns out that this technique involves blocking, which is also used in other approaches, such as the one reported in the suggested reference. The work in that paper addresses efficiency (and we do not) and assumes that the schemas overlap. The heterogeneous scenario we consider makes no such assumption, which is a substantial difference from their setting. Therefore, we feel this work is not really related to ours. For these reasons, we feel the comparable approaches are already covered in the paper.  }

\textit{\textbf{Issue 3: }} \textit{ In the experiments, it is very hard to compare the different methods in Tables 4 and 5, as there are a lot of numbers and it is hard to identify the best method.  }

\textbf{Response. }  \textcolor{question}{ We acknowledge that Table 4 could be improved by highlighting the best result for each approach; we have done this in the paper. In addition, we refer to the text already present in the paper, in which we summarize our conclusions about the best configuration for SERIMI, namely: ``Concluding, the highest accuracy is achieved by combining class-based matching with direct matching. Further, candidate set reduction helps to improve time efficiency. In the following experiments, we will use S+SR+DM, in combination with the top-1 approach where there is a one-to-one mapping, or the threshold approach otherwise.'' 
}

\textit{\textbf{Issue 4: }} \textit{  In addition, the paper should study the efficiency and scalability issues.}

\textbf{Response. }  \textcolor{question}{While important topics, efficiency and scalability are separate problems that lie outside the scope of this paper.
}


  
\bibliographystyle{IEEEtran}
\bibliography{journal}

 
\end{document}


