
\section{Class-based Disambiguation for Instance Matching} 
In this section, we present the overall instance matching process performed by SERIMI and then focus on the class-based disambiguation step.
%how to use the collective inference disambiguation approach to solve the instance matching problem in the large, heterogeneous and noisy environment of Web data.

\subsection{The Instance Matching Process} 
The overall matching process is depicted in Fig.~2. Starting from a set of instances $S$ of the class $C$ in the source dataset, we first perform candidate selection to obtain the pseudo-homonym set $PH(s)$ in the target dataset for each $s \in S$. Existing blocking techniques~\cite{hernandez_merge/purge_1995,mccallum_efficient_2000,papadakis_efficient_2011} can be used for this step. We adopt an entropy-based approach to find blocking keys and then use the key values (i.e., tokens extracted from an attribute such as name or title) to determine all candidates in the target dataset that match the source instances.
%In particular, we select attributes with entropy higher than a threshold. Given the attribute $a$ and its literal values $O(a) = \{o | (s,a,o) \in G\}$, let $p$ be the probability mass function of $a$, then the entropy of $a$ is $H(a)=-\sum_{o \in O(a)} p(o)log_2p(o)$.
Table 1 shows example results obtained for the \verb+Diseasome+ diseases 85, 379 and 502, using \verb+Diseasome+ as the source and \verb+DBpedia+ as the target dataset. More effective matching is then achieved through the second step, disambiguation, in which the candidates in $PH(s)$ are refined.
% using the function CRDS that we will introduce next.
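To make the candidate-selection step concrete, the following Python sketch illustrates token-based blocking under simplifying assumptions; the inverted index \verb+target_index+ and the helper \verb+tokens+ are hypothetical names, not part of SERIMI's implementation:

```python
def tokens(value):
    """Split an attribute value into lowercase tokens (blocking-key values)."""
    return set(value.lower().split())

def candidate_selection(source_instances, target_index):
    """For each source instance s, collect all target instances sharing at
    least one blocking-key token: the pseudo-homonym set PH(s)."""
    ph = {}
    for s, label in source_instances.items():
        candidates = set()
        for tok in tokens(label):
            candidates |= target_index.get(tok, set())
        ph[s] = candidates
    return ph

# Toy example mirroring Table 1 (Diseasome disease 85 against DBpedia):
source = {"diseasome:85": "Anemia"}
index = {"anemia": {"db:Anemia", "db:Anemia_fern", "db:Aplastic_anemia"}}
ph = candidate_selection(source, index)
```

In practice the index would be built only over attributes whose entropy exceeds the chosen threshold, so that low-information tokens do not inflate the candidate sets.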

\begin{figure} 
\centering
\includegraphics[width=0.57\textwidth]{fig2.png}
\caption{The process of finding and disambiguating the pseudo-homonym sets given a set of source instances.} 
\end{figure} 
 

\begin{table} 
\caption{Pseudo-homonym sets for Anemia, Erythrocytosis and Hemophilia.} 
\scriptsize\tt
\begin{tabular}{lll} 
\hline\noalign{\smallskip} 
diseasome:85 & diseasome:379 &	diseasome:502\\ 
Token: Anemia	 & Token: Erythrocytosis & Token: Hemophilia\\ 
\noalign{\smallskip} 
\hline 
\noalign{\smallskip} 
db:Anemia & db:Erythrocytosis & db:Hemophilia \\
db:Anemia\_fern & db:Familial\_erythroc. & db:Hemophilia\_A \\
db:Aplastic\_anemia  &  & db:Porphyric\_Hemo.  \\
  
 \hline 
\end{tabular} 
\end{table} 

\subsection{Class-based Disambiguation} 
\textbf{Similarity Measure.} We first decompose the instance representation into different parts, and then elaborate on the similarity measure used for $\sim_S$, i.e., the measure used to disambiguate instances based on the class of interest. 

%Finally,  \textbf{Similarity measure:} To explain how we disambiguate the pseudo-homonym sets, we need to introduce our notion of similarity between instances. Firstly, we define the flat features that are used as items of measurement in the similarity measure.

\begin{definition}(Features) Given a graph $G$ and a set of instances $X$ in $G$, we employ the following sets of features:

\begin{itemize}
\vspace{-4pt}
\setlength{\itemsep}{-4pt}
	\item $P(X) = \{p | (s, p, o) \in IR(G, X) \land s \in X\}$,
  \item $D(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in L \}$,
	\item $O(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in U \}$, 
	\item $T(X) = \{(p,o) | (s, p, o) \in IR(G, X) \land s \in X \}$.
	\vspace{-4pt}
\end{itemize}
\end{definition}  

Intuitively, $P(X)$ is the set of predicates that appear in the representation of $X$, $D(X)$ the set of literals, $O(X)$ the set of URIs, and $T(X)$ the set of predicate-object pairs.
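The four feature sets of Definition 1 can be extracted as follows; this is a simplified sketch in which every triple whose subject is in $X$ stands in for the instance representation $IR(G, X)$, and the argument \verb+literals+ plays the role of the set $L$ (everything else is treated as a URI in $U$):

```python
def feature_sets(triples, X, literals):
    """Extract P, D, O, T for the instances X from a set of (s, p, o) triples."""
    P = {p for (s, p, o) in triples if s in X}                        # predicates
    D = {o for (s, p, o) in triples if s in X and o in literals}      # literal objects
    O = {o for (s, p, o) in triples if s in X and o not in literals}  # URI objects
    T = {(p, o) for (s, p, o) in triples if s in X}                   # predicate-object pairs
    return P, D, O, T

# Toy example: one instance with a literal name and a URI type.
triples = {("x", "rdf:type", "db:Disease"), ("x", "rdfs:label", "Anemia")}
P, D, O, T = feature_sets(triples, {"x"}, literals={"Anemia"})
```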
To implement $\sim_S$, we need to capture the similarity between two sets of instances $A$ and $B$, which is realized by the function we call $RDS$, defined as follows:
\begin{multline}
 RDS(A,B)=SetSim(P(A),P(B)) + SetSim(D(A),D(B)) + \\
  SetSim(O(A),O(B)) + SetSim(T(A),T(B)) 
\end{multline} 

We want $SetSim$ to reflect the intuition that two sets $A$ and $B$ with $n$ features in common should be more similar than two sets with $n-1$ features in common, regardless of how many features each set contains in total. 
%Based on Tversky's contrast model \cite{tversky_features_1977}, 
We define $SetSim$\footnote{In our experiments, this $SetSim$ measure outperforms the common Jaccard and Dice indices by a small but consistent margin.} as follows: 
\begin{equation}
SetSim(A,B)=|A\cap B| - \left(\frac {|A - B| + |B - A|}{2 |A \cup B|}  \right)
\end{equation}
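A direct Python rendering of Equations 1 and 2 may clarify how the two measures combine; here the four feature sets are simply passed as parallel tuples, an illustrative simplification:

```python
def set_sim(a, b):
    """Equation 2: reward shared features, lightly penalize the symmetric
    difference (in the spirit of Tversky's contrast model)."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return len(a & b) - (len(a - b) + len(b - a)) / (2 * len(union))

def rds(feats_a, feats_b):
    """Equation 1: sum SetSim over the four feature sets (P, D, O, T)."""
    return sum(set_sim(fa, fb) for fa, fb in zip(feats_a, feats_b))

# Two shared features beat one, regardless of the set sizes:
assert set_sim({1, 2, 3}, {1, 2, 4}) > set_sim({1, 2}, {1, 3})
```

Note that, unlike Jaccard or Dice, the score is dominated by the absolute number of shared features, which is exactly the intuition stated above.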

%The RDS(A,B) function (Equation 2) gives a score of similarity between two (set of) instances and is used in the disambiguation process of the pseudo-homonyms sets, which is described next.

\textbf{Class-based Disambiguation.} In the disambiguation process, given the set of pseudo-homonym sets $PH(S)$, we reduce our problem to that of finding, in each pseudo-homonym set, the instances $t \in PH(s) \in PH(S)$ that are most similar to all the other pseudo-homonym sets $PH(S)^- = PH(S) \setminus PH(s)$. The $RDS$ function is used to compute this similarity, i.e., $RDS(\{t\}, PH(s'))$ for each $PH(s') \in PH(S)^-$. This process is depicted in Fig.~3a for the instance $h_{11}$, which is compared to the pseudo-homonym sets $H2$ and $H3$. After the similarity is computed for all instances $t$ in this fashion, the instance with the highest score in each pseudo-homonym set is selected (Fig.~3b). Instead of returning only the top-$1$ match as in Fig.~3b, SERIMI may return the top-$k$ matches.  

\begin{figure} 
\vspace{-10pt}
\centering
\includegraphics[width=0.38\textwidth]{fig3.png}
\caption{A) Computing the similarity score for $h_{11}$. B) The similarity score for all resources.} 
 \vspace{-10pt}
\end{figure} 

The comparison between $t$ and the other pseudo-homonym sets $PH(S)^-$ is captured by Equation 3, where each individual score $RDS(\{t\}, PH(s'))$ is weighted by the cardinality of $PH(s')$, such that a $PH(s')$ with high cardinality has a smaller impact on the final aggregated measure. We do this to capture the intuition that small sets containing only a few representative instances are a better representation of the class of interest. 
%consequently we want that resources more similar to singleton pseudo-homonyms set to have a relative higher final score of similarity.
\begin{equation}
URDS(t,PH(S)^-)=\sum_{PH(s') \in PH(S)^-}\frac{RDS(\{t\},PH(s'))}{|PH(s')|}
\end{equation}

We normalize the results of Equation 3 by the maximum score among the instances in $PH(s)$ as 
\begin{equation}
CRDS(t,PH(s), PH(S)^-)= \frac{URDS(t,PH(S)^-)}{MaxScore(PH(s),PH(S)^-)} 
\end{equation}
where
\begin{multline}
MaxScore(PH(s),PH(S)^-) = \\ \max\{URDS(t', PH(S)^-) \mid t' \in PH(s)\}
\end{multline}
This yields a score in the range $[0,1]$. 
%, where 0 means not similar and 1 means equal. 
Using this function, an instance $t$ is considered a solution if $CRDS(t,PH(s), PH(S)^-)$ is higher than a given threshold $\delta$ or if its rank is within the top-$k$. We found that a value of $\delta$ that performs well is the maximum of the mean and median of the scores obtained for all instances in $PH(S)$, which we refer to as $\delta_{m}$. In our experiments, we tested the settings $\delta = \delta_m$, $\delta = 1.0$, $\delta = 0.95$, $\delta = 0.9$ and $\delta = 0.85$. We also evaluated different top-$k$ settings, where only the top-1, top-2, top-5 and top-10 matches were selected. 
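The scoring pipeline of Equations 3--5 can be sketched as follows; \verb+rds+ is passed in as a parameter here and stands in for the feature-based similarity of Equation 1, so any set-of-instances similarity with the same signature would serve for illustration:

```python
def urds(t, ph_others, rds):
    """Equation 3: similarity of {t} to the other pseudo-homonym sets,
    each contribution weighted down by the set's cardinality."""
    return sum(rds({t}, ph) / len(ph) for ph in ph_others if ph)

def crds_scores(ph_s, ph_others, rds):
    """Equations 4-5: normalize URDS by the best score within PH(s),
    yielding values in [0, 1]."""
    raw = {t: urds(t, ph_others, rds) for t in ph_s}
    best = max(raw.values())
    return {t: (score / best if best else 0.0) for t, score in raw.items()}

# Toy similarity: count of shared features between the instances' feature sets.
feats = {"a": {1, 2}, "b": {9}, "c": {1, 3}, "d": {2, 3}}
def toy_rds(A, B):
    fa = set().union(*(feats[x] for x in A))
    fb = set().union(*(feats[x] for x in B))
    return len(fa & fb)

ph_s = {"a", "b"}         # candidates for one source instance
others = [{"c"}, {"d"}]   # the remaining pseudo-homonym sets
scores = crds_scores(ph_s, others, toy_rds)   # "a" -> 1.0, "b" -> 0.0
```

In this toy run, \verb+a+ shares features with both other sets while \verb+b+ shares none, so only \verb+a+ would pass a threshold such as $\delta = 0.9$.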

%\textbf{Solution Set Building.} Since the process described in 4.2 builds the pseudo-homonyms set from a group of source instances that collectively represent a class C, we can apply the CRDS function to disambiguate the pseudo-homonym sets: Let S be the set of all pseudo-homonyms sets and R $\in$ S be one of these pseudo-homonyms set, for each instances r in R, we generate a score $\delta$ = CRDS(r, R, S). The solution for a pseudo-homonym set R comprises all instances with a score $\delta$ $\geq$ $\delta_{threshold}$. Thus, our approach may select more than one instance from a pseudo-homonym set to be part of the solution set.
%

\subsection{Optimization}

%\textbf{Purging Outliers.} 
%%Some of the CRDS results are not reliable, since 
%Note that $CRDS$ can normalize even small values to 1. To optimize our results, we eliminate outliers that are below a specific threshold $\phi$, before applying the normalization. A reasonable approximation of $\phi$ is given by the difference between the mean of $\Delta$ and its standard deviation $\sigma$; where $\Delta$ is the set of similarity scores (computed via Equation 4) for all the instances in $PH(S)$. We only consider this heuristic for $\Delta$ with standard deviation $\sigma$ $>$ 0.13, since we empirically observed that cases with small $\sigma$ eliminates correct matches from the solution set.

\textbf{Increasing Efficiency.} When the source dataset is large, the number of pseudo-homonym sets to consider grows, increasing the computation time of $CRDS$. Therefore, given $S$, we execute the process described so far sequentially over chunks of instances of $S$ of size $\mu$, where $\mu \geq 2$. Thus, we execute the $CRDS$ function $|S|/\mu$ times. 
%, where each time we execute it for $\mu$ distinct instances.  
%In order to determine an appropriate value for $\mu$, we evaluated different set sizes. 
We tested the chunk sizes \{2, 5, 10, 20, 50, 100\}. Although the total time to process $n$ instances was smaller for small $\mu$, the variation is not significant; moreover, we found that the precision of the matches is not affected by this parameter. 
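The chunked execution can be sketched as a simple driver loop; \verb+disambiguate+ is a hypothetical callback standing in for the $CRDS$-based selection described above:

```python
def chunks(items, mu):
    """Yield consecutive chunks of size mu (mu >= 2); the last chunk
    may be smaller when |S| is not a multiple of mu."""
    items = list(items)
    for i in range(0, len(items), mu):
        yield items[i:i + mu]

def match_in_chunks(source_instances, mu, disambiguate):
    """Run the disambiguation over |S|/mu chunks instead of all of S at once."""
    matches = {}
    for chunk in chunks(source_instances, mu):
        matches.update(disambiguate(chunk))
    return matches
```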

\textbf{Reinforcing Evidence.} Another advantage of using chunks instead of the entire set $S$ is that at every iteration (after processing each chunk), we can select the instance with the highest score and add it as a singleton pseudo-homonym set (a set with a single element) to the set $PH(S)$ of pseudo-homonym sets used in subsequent iterations. This extra singleton set acts as additional evidence for the class of interest. 
%In the experiment, we add a (the size of the chunk) singleton sets to $PH(S)$ in every iteration, where we added, in total, a maximum of $\mu$ pseudo-homonym sets. 
%In this way, we give a reasonable amount of evidences, without delaying its performance too much, and thereby we improve the accuracy of our approach. 
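The reinforcement idea can be sketched as follows; \verb+score_chunk+ is a hypothetical stand-in for the $CRDS$ scoring of one chunk, taking the accumulated singleton evidence as an extra argument:

```python
def disambiguate_with_reinforcement(all_chunks, score_chunk):
    """After each chunk, feed the top-scoring instance back as a singleton
    pseudo-homonym set, as extra class evidence for later iterations."""
    evidence = []   # singleton pseudo-homonym sets carried forward
    matches = {}
    for ph_sets in all_chunks:
        scores = score_chunk(ph_sets, evidence)   # dict: instance -> score
        best = max(scores, key=scores.get)
        evidence.append({best})                   # singleton set
        matches.update(scores)
    return matches, evidence
```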
