\section{Related Work}
Instance matching across datasets relies on similarity functions, thresholds, and comparable attributes, which together constitute a \emph{matching scheme}. While the majority of approaches use a flat representation of instances based on attribute values, other features may also be employed. We discuss existing approaches along these dimensions of features, similarity functions, and matching schemes.

\textbf{Matching Features.}
Instance features are derived from flat attribute values, from structural information (e.g., relations between RDF resources)~\cite{melnik_similarity_2002,spaccapietra_survey_2005, bernstein_discovering_2009}, or from semantic information extracted from ontologies.
ObjectCoref~\cite{hu_bootstrapping_2011}, for instance, exploits the semantics of OWL properties such as \textit{owl:InverseFunctionalProperty} and \textit{owl:FunctionalProperty}.
Also, the combination of instance-level and schema-level features has been explored by PARIS~\cite{DBLP:journals/pvldb/SuchanekAS11}, which solves the problems of instance and schema matching jointly.

SERIMI targets the heterogeneous scenario where, in the worst case, no structural, semantic, or schema information is available. It is based on a simple \emph{flat representation}, in which an instance is captured as a set of attribute values. This representation is employed both for single instances and for classes of instances, the latter being needed for class-based matching.

\textbf{Similarity Functions.}
The choice of similarity functions depends on the nature of the features. For strings, character-based (e.g., Jaro, Q-grams), token-based (e.g., SoftTF-IDF, Jaccard), and document-based functions (e.g., cosine similarity) have been used. Beyond such syntactic information, special similarity functions have also been proposed to exploit different kinds of (lexical) semantic relatedness~\cite{budanitsky_evaluating_2006, han_structural_2010}.
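To illustrate the distinction between these families (a minimal sketch of our own, not tied to any of the cited systems; the tokenization and normalization choices are assumptions for illustration), a character-based q-gram measure and a token-based Jaccard measure can be contrasted as follows:

```python
# Illustrative sketch: character-based (q-gram) vs. token-based (Jaccard)
# string similarity. Both return a score in [0, 1].

def qgrams(s, q=2):
    """Set of character q-grams of a string (here q=2, i.e. bigrams)."""
    s = s.lower()
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def qgram_similarity(a, b, q=2):
    """Jaccard overlap of the character q-gram sets of two strings."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def token_jaccard(a, b):
    """Jaccard overlap of whitespace tokens (basic punctuation stripped)."""
    ta = {t.strip(',.').lower() for t in a.split()}
    tb = {t.strip(',.').lower() for t in b.split()}
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Token-based measures handle word reordering that character-based ones
# only partially capture:
token_jaccard("Michael Jackson", "Jackson, Michael")  # -> 1.0
```

The token-based variant treats ``Michael Jackson'' and ``Jackson, Michael'' as identical, while the character-based score is penalized by the differing q-grams around the word boundary.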

Along this dimension, we also pursue a simple approach in which only tokens are employed. For our new problem of class-based matching, which involves comparing sets of instances, we propose a \emph{set-based similarity function} that takes the token overlap between sets into account.
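As an illustration of this idea (a hypothetical sketch only; the concrete function used by SERIMI is defined later in the paper and may differ), a set-based token-overlap similarity between two classes of instances could be formulated as follows:

```python
# Hypothetical sketch of a set-based similarity between two classes of
# instances. Each instance is given in the flat representation as a list
# of attribute values; a class is a collection of such instances.

def instance_tokens(values):
    """Flat representation: the set of lower-cased tokens over all values."""
    return {tok.lower() for v in values for tok in v.split()}

def class_similarity(class_a, class_b):
    """Token overlap (Jaccard) between the pooled token sets of two classes."""
    tokens_a = set().union(*(instance_tokens(i) for i in class_a)) if class_a else set()
    tokens_b = set().union(*(instance_tokens(i) for i in class_b)) if class_b else set()
    union = tokens_a | tokens_b
    return len(tokens_a & tokens_b) / len(union) if union else 0.0
```

Pooling the tokens of all instances in a class lets the comparison operate on sets of instances rather than on individual instance pairs, which is the essential difference from direct matching.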

\textbf{Matching Schemes.}
With approaches relying on a flat representation of instances, i.e., attribute values, the employed schemes comprise the similarity functions, thresholds, and comparable attributes. Comparable attributes are either computed via automatic schema matching or assumed to be manually defined by experts~\cite{DBLP:conf/semweb/NiuRZW11}.
Techniques with different degrees of supervision are then employed for learning the scheme. Knofuss+GA~\cite{DBLP:conf/esws/NikolovdM12} is an unsupervised approach that employs a genetic algorithm for this learning task. SIFI~\cite{DBLP:journals/pvldb/WangLYF11} and OPTrees~\cite{DBLP:conf/vldb/ChaudhuriCGK07} represent supervised approaches that learn the schemes from a given set of examples.
Other approaches, such as Zhishi.links~\cite{DBLP:conf/semweb/NiuRZW11}, RIMON~\cite{DBLP:conf/semweb/WangZHZLQT10}, and Song et al.~\cite{Song:2011:AGD:2063016.2063058}, assume matching schemes that were for the most part manually engineered, i.e., the similarity functions and thresholds were defined manually. They focus on the problem of learning the best comparable attributes.

The above solutions focus on direct matching. In contrast, class-based matching does not rely on a complex scheme. It uses a special similarity function that we designed specifically for this matching task. The problem of finding the threshold is cast as one of outlier detection, for which we propose an unsupervised solution.
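To sketch the intuition behind this casting (assumed details only; a simple z-score cut-off stands in here for our actual unsupervised procedure), threshold selection can be viewed as flagging upper outliers among the candidate scores of a source instance:

```python
# Illustrative sketch: threshold selection as outlier detection. Given the
# similarity scores of one source instance against all its candidates, a
# true match is expected to stand out from the bulk of scores; a z-score
# style cut-off flags such outliers without any labeled training data.
import statistics

def select_matches(scores, k=1.5):
    """Return indices of candidates whose score lies more than k
    (population) standard deviations above the mean score."""
    if len(scores) < 2:
        return list(range(len(scores)))
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    if stdev == 0:
        return []  # all candidates score alike: nothing stands out
    return [i for i, s in enumerate(scores) if (s - mean) / stdev > k]
```

No threshold needs to be fixed in advance: the cut-off adapts to the score distribution of each source instance, which is what makes the selection unsupervised.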

Overall, our solution can be characterized as an unsupervised, simple, yet effective approach that employs a novel class-oriented similarity function, matching technique, and threshold selection method to exploit a space of class-related features not previously studied in the literature.

Other systems in the literature tackle the same problem. For instance, Linda~\cite{DBLP:conf/cikm/BohmMNW12} is a Web-scale entity matching system that was evaluated on a small subset of the datasets considered here; the reported results show lower accuracy than the systems used in our evaluation.