 
\section{Approximated Direct Matching} 

In this section we present our solution to the class-based matching problem. A set of correct matches $M$ can be determined by class-based matching only when the CTP of $M$ can be determined. In real-world data the CTP over $M$ may not exist, because the data is heterogeneous ($M$ may contain instances from different classes of interest) and incomplete (instances in $M$ may lack some predicates). Nevertheless, the class-based method can still be applied by approximating the notion of CTP, just as the similarity between instances in the direct match is approximated by applying approximate string matching over the IFP. Below we give some definitions and then describe our approximated class-based matching approach.

\begin{definition}[Instance Representation] The \emph{instance representation} $IR: \mathbb{G} \times 2^U \rightarrow \mathbb{G}$ is a function that, given a graph $G$ and a set of instances $W$, yields the set of triples in which an $s \in W$ appears as the subject, i.e. $IR(G, W) = \{ (s, p, o) | (s, p, o) \in G, s \in W \}$. 
\end{definition} 

Notice that a representation of a single instance $s$ is given by $IR(G, \{s\})$. 
%We will use the terms instance and instance representation interchangeably from now on. 
For simplicity, we use in this work the outgoing edges $(s, p, o)$ of a resource $s$ to form its representation $IR(G, \{s\})$.
% However, other types of representations that may include incoming edges, as well as more complex structures in the data (e.g. paths instead of edges), are applicable. 

We firstly decompose the instance representation into features, then we elaborate on the similarity used to find correct matches $M$.

\begin{spacing}{0.5}
\begin{definition}[Features] Given a graph $G$ and a set of instances $X$ in $G$, we employ the following sets of features:


\begin{itemize}
\vspace{-4pt}
\setlength{\itemsep}{-4pt}
	\item $P(X) = \{p | (s, p, o) \in IR(G, X) \land s \in X\}$,
  \item $D(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in L \}$,
	\item $O(X) = \{o | (s, p, o) \in IR(G, X) \land s \in X \land o \in U \}$, 
	\item $T(X) = \{(p,o) | (s, p, o) \in IR(G, X) \land s \in X \}$.
	\vspace{-4pt}
\end{itemize}
\end{definition}  
\end{spacing}
Intuitively, $P(X)$ is the set of predicates that appear in the representation of $X$, $D(X)$ is the set of literals, $O(X)$ the set of URIs, and $T(X)$ the set of predicate-object pairs.
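To make the four feature sets concrete, here is a minimal Python sketch over a toy triple list. The prefix-based `is_uri` test is an assumption for illustration only; a real RDF store distinguishes literals from URIs explicitly.

```python
# Sketch of the feature sets P(X), D(X), O(X), T(X) over a toy triple store.
# Telling literals apart from URIs by a string prefix is an assumption here.

def is_uri(term):
    return isinstance(term, str) and term.startswith("http")

def features(triples, X):
    """Return (P, D, O, T) for the instances in X, using outgoing edges only."""
    ir = [(s, p, o) for (s, p, o) in triples if s in X]  # IR(G, X)
    P = {p for (_, p, _) in ir}                      # predicates
    D = {o for (_, _, o) in ir if not is_uri(o)}     # literal objects
    O = {o for (_, _, o) in ir if is_uri(o)}         # URI objects
    T = {(p, o) for (_, p, o) in ir}                 # predicate-object pairs
    return P, D, O, T
```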
%For implementing $\sim_S$, we need to capture the similarity between sets of instances $A$ and $B$, which is realized by the function we call $RDS$ as follows:
We now define a function $RDS(A,B)$ that computes a similarity score between two sets of instances $A$ and $B$ based on their features. This function will be used later in our class-based matching approach.
\begin{multline}
 RDS(A,B)=SetSim(P(A),P(B)) + SetSim(D(A),D(B)) + \\
  SetSim(O(A),O(B)) + SetSim(T(A),T(B)) 
\end{multline} 

We want $SetSim$ to reflect the intuition that two sets $A$ and $B$ that have $n$ features in common should be more similar than two sets with $n-1$ features in common, regardless of how many features each set contains. 
%Based on Tversky's contrast model \cite{tversky_features_1977}, 
We define $SetSim$\footnote{In our experiments, this $SetSim$ measure outperforms the common Jaccard and Dice indices by a small but consistent margin.} as follows: 
\begin{equation}
SetSim(A,B)=|A\cap B| - \left(\frac {|A - B| + |B - A|}{2 |A \cup B|}  \right)
\end{equation}
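A minimal Python sketch of $SetSim$ (Equation 2) and the $RDS$ sum (Equation 1); the convention that two empty sets score $0$ is an assumption added for robustness.

```python
# Sketch of SetSim and RDS. Because the penalty term is always below 1,
# the integer intersection count dominates: n shared features always
# score higher than n-1, regardless of set sizes.

def set_sim(A, B):
    """SetSim(A,B) = |A & B| - (|A - B| + |B - A|) / (2 |A | B|)."""
    A, B = set(A), set(B)
    union = A | B
    if not union:                 # both sets empty: 0 by convention (assumption)
        return 0.0
    return len(A & B) - (len(A - B) + len(B - A)) / (2 * len(union))

def rds(feats_a, feats_b):
    """RDS: sum of SetSim over the paired feature sets (P, D, O, T)."""
    return sum(set_sim(a, b) for a, b in zip(feats_a, feats_b))
```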

%The RDS(A,B) function (Equation 2) gives a score of similarity between two (set of) instances and is used in the disambiguation process of the pseudo-homonyms sets, which is described next.

\textbf{Class-based approach.} In the disambiguation process, given a set of instances $S$ and a set of candidate sets $C(S)$, we reduce our problem to finding, in each candidate set, the instances $t$, i.e. $t \in C(s) \in C(S)$, that are most similar to all other candidate sets $C(S)^- = C(S) \setminus C(s)$. The $RDS$ function is used to compute this similarity, i.e. $RDS(\{t\}, C(s'))$ for $C(s') \in C(S)^-$. This process is depicted in Fig. 3a for the instance $h_{11}$, which is compared to the candidate sets $H2$ and $H3$. After the similarity is computed for all instances $t$ in this fashion, the instance with the highest score in each candidate set is selected as a correct match (Fig. 3b). Instead of the top-$1$ shown in Fig. 3b, SERIMI may return the top-$k$ matches.  

%\begin{figure} 
%\vspace{-10pt}
%\centering
%\includegraphics[width=0.46\textwidth]{fig3.png}
%\caption{A) Computing the similarity score for $h_{11}$. B) The similarity score for all resources.} 
% \vspace{-20pt}
%\end{figure} 

The comparisons between $t$ and the other candidate sets $C(S)^-$ are captured by Equation 3, where each individual score $RDS(\{t\}, C(s'))$ is weighted by the cardinality of $C(s')$, such that a $C(s')$ with high cardinality has a smaller impact on the final aggregated measure. We do this to capture the intuition that small sets containing only a few representative instances are better representations of the class of interest. 
%consequently we want that resources more similar to singleton pseudo-homonyms set to have a relative higher final score of similarity.
\begin{equation}
URDS(t,C(S)^-)=\sum_{C(s') \in C(S)^-}\frac{RDS(\{t\},C(s'))}{|C(s')|}
\end{equation}
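Equation 3 can be sketched in one line of Python. To keep the sketch self-contained, the per-set scores $RDS(\{t\}, C(s'))$ are assumed to be precomputed numbers, each paired with the cardinality $|C(s')|$ of its candidate set.

```python
# Sketch of URDS (Equation 3): each precomputed RDS score is divided by
# the cardinality of its candidate set, so large sets contribute less.

def urds(scored):
    """scored: iterable of (rds_score, candidate_set_size) pairs over C(S)^-."""
    return sum(score / size for score, size in scored)
```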

We normalize the result of Equation 3 by the maximum score among all instances in the candidate set $C(s)$ as 
\begin{equation}
CRDS(t,C(s), C(S)^-)= \frac{URDS(t,C(S)^-)}{MaxScore(C(s),C(S)^-)} 
\end{equation}
where
\begin{multline}
MaxScore(C(s),C(S)^-) = \\ MAX\{URDS(t', C(S)^-) \;|\; t' \in C(s)\}
\end{multline}
This yields a score in the range $[0,1]$. 
%, where 0 means not similar and 1 means equal. 
Using this function, an instance $t$ is considered a correct match if $CRDS(t,C(s), C(S)^-)$ is higher than a defined threshold $\delta$ or if its rank is within the top-$k$. In Section 6, we evaluate the accuracy of the top-$k$ approach in comparison with the threshold approach.
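The normalization (Equations 4--5) and the two selection policies can be sketched as follows. The instance identifiers are illustrative, the URDS scores are assumed precomputed, and the guard for an all-zero maximum is an assumption added to avoid division by zero.

```python
# Sketch of CRDS (Equations 4-5) and the threshold / top-k selection policies.

def crds_scores(urds_by_instance):
    """Normalize each URDS score in a candidate set by the maximum score."""
    max_score = max(urds_by_instance.values(), default=0.0)
    if max_score == 0.0:                      # all-zero guard (assumption)
        return {t: 0.0 for t in urds_by_instance}
    return {t: s / max_score for t, s in urds_by_instance.items()}

def select(scores, delta=None, k=None):
    """Keep instances with CRDS >= delta, or the top-k instances by score."""
    if delta is not None:
        return [t for t, s in scores.items() if s >= delta]
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]
```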

%We found that a value for $\delta$ that performs well is the maximum of the means and medians of the scores obtained for all instances in $C(S)$, which we will refer to as $\delta_{m}$. In the experiment, we tested using the different settings $\delta = \delta_m$, $\delta = 1.0$, $\delta = 0.95$, $\delta = 0.9$ and $\delta = 0.85$. Also, we evaluated different top-$k$ settings, where only the top-1, top-2, top-5 and top-10 matches were selected. 

%\textbf{Solution Set Building.} Since the process described in 4.2 builds the pseudo-homonyms set from a group of source instances that collectively represent a class C, we can apply the CRDS function to disambiguate the pseudo-homonym sets: Let S be the set of all pseudo-homonyms sets and R $\in$ S be one of these pseudo-homonyms set, for each instances r in R, we generate a score $\delta$ = CRDS(r, R, S). The solution for a pseudo-homonym set R comprises all instances with a score $\delta$ $\geq$ $\delta_{threshold}$. Thus, our approach may select more than one instance from a pseudo-homonym set to be part of the solution set.
%

%\subsection{Optimization}

%\textbf{Purging Outliers.} 
%%Some of the CRDS results are not reliable, since 
%Note that $CRDS$ can normalize even small values to 1. To optimize our results, we eliminate outliers that are below a specific threshold $\phi$, before applying the normalization. A reasonable approximation of $\phi$ is given by the difference between the mean of $\Delta$ and its standard deviation $\sigma$; where $\Delta$ is the set of similarity scores (computed via Equation 4) for all the instances in $C(S)$. We only consider this heuristic for $\Delta$ with standard deviation $\sigma$ $>$ 0.13, since we empirically observed that cases with small $\sigma$ eliminates correct matches from the solution set.

%\textbf{Increasing Efficiency.} When the source dataset is large, the number of pseudo-homonyms sets to consider increases, affecting the computation time of CRDS. Therefore, given $S$, we execute the process described so far sequentially over chunks of instances in $S$ of size $\mu$, where $\mu \geq$ 2. Thus, we execute the CRDS function $|S|/\mu$ times. 
%, where each time we execute it for $\mu$ distinct instances.  
%In order to determine an appropriate value for $\mu$, we evaluated different set sizes. 
%We tested the set of sizes \{2, 5, 10, 20, 50, 100\}. Although total time to process $n$ instances was smaller for small $\mu$, the variation is not significant; also, we found that the precision of matches is not affected by this parameter. 

%\textbf{Reinforcing Evidences.} Another advantage of using chunks instead of the entire set $S$ is that at every iteration (after processing each chunk), we can select the instance with the highest score and add it as a singleton set (a set with one element) to the set $C(S)$ of pseudo-homonym sets to be used in subsequent iterations. This extra singleton set acts 
%as a pivot in the function CRDS(r, R, S), increasing 
%as additional evidence for the class of interest. 
%In the experiment, we add a (the size of the chunk) singleton sets to $C(S)$ in every iteration, where we added, in total, a maximum of $\mu$ pseudo-homonym sets. 
%In this way, we give a reasonable amount of evidences, without delaying its performance too much, and thereby we improve the accuracy of our approach. 
