\subsection{Selecting the Threshold} 
\label{sec:threshold}

As discussed, the Top-1 approach can be used when the datasets are duplicate-free. In all other cases, a threshold selection method should be employed: only instances with a similarity score above the computed threshold $\delta$ are selected as matches. State-of-the-art methods \cite{DBLP:journals/pvldb/WangLYF11, DBLP:conf/vldb/ChaudhuriCGK07} are supervised, relying on training data to find the best threshold. We propose an unsupervised method, which only uses statistics derived from the computed scores. We cast the problem of threshold selection as that of finding statistical outliers among the similarity scores. In particular, we use two bags of scores: one containing only the maximum scores and the other containing all scores. 

\begin{definition}[Bag of Scores] 
Given the candidate sets $C(S)$, the bag of \emph{all scores} contains a score for every candidate $t$, i.e., $Score_{all}$ = $\{Sim(t, C(S)) $ $| t \in C(s), C(s) \in C(S)\}$. The bag of \emph{maximum scores} contains one score for every $C(s) \in C(S)$, i.e., $Score_{max} = \{MaxScore(C(s), C(S)) | C(s) \in C(S)\}$. 
\end{definition}
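The two bags can be computed directly from the per-candidate scores. The sketch below assumes the scores are available as a mapping from each source instance to the similarity scores of its candidates; the variable and function names are illustrative, not part of SERIMI's implementation:

```python
def bags_of_scores(candidates):
    """Build the bags Score_all and Score_max from per-candidate scores.

    `candidates` maps each source instance s to a dict of
    candidate -> Sim(t, C(S)) scores, i.e., one candidate set C(s) per s.
    """
    score_all = [score
                 for cand_set in candidates.values()
                 for score in cand_set.values()]
    # One maximum score per candidate set C(s).
    score_max = [max(cand_set.values()) for cand_set in candidates.values()]
    return score_all, score_max

# Hypothetical scores for two source instances:
candidates = {
    "s1": {"t1": 0.9, "t2": 0.3},
    "s2": {"t3": 0.8, "t4": 0.1},
}
score_all, score_max = bags_of_scores(candidates)
```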

The maximum scores constitute the starting point for threshold selection. Intuitively, two cases can be distinguished: (1) all maximum scores are close to 1 and the differences among them are small; (2) there are large variations among the scores, some of them being low and approaching 0. 

Note that the first case corresponds to the setting where correct matches are easy to find, i.e., at least one candidate with a score close to 1 could be found for every source instance. In this case, $\delta$ is simply defined as the minimum score in $Score_{max}$, so that every candidate whose score appears in $Score_{max}$ is selected. This strategy works in this ``easy setting'' because, due to the use of set-based similarity in class-based matching, score differences among correct matches tend to be small, while differences between correct and incorrect matches are much larger. Thus, incorrect matches typically have scores far below the minimum score in $Score_{max}$. 

In the second, ``harder setting'', ``bad'' candidates are present, i.e., candidates with low scores in $Score_{max}$. This indicates that for some source instances, no correct candidates exist or could be found.  
However, we cannot use the minimum score as before to filter out these ``bad'' candidates: it could be too low or, more generally, not precise enough to separate correct from incorrect matches. 
To find $\delta$ in this case, we propose to detect outlier scores. To find outliers more precisely, we use the bag of all scores, $Score_{all}$, instead of $Score_{max}$. Intuitively, candidates with an outlier score share fewer features with the class of interest and can thus be regarded as incorrect. 

As a mechanism to implement the ideas discussed above, we propose a method based on Chauvenet's criterion \cite{chauvenet}, a statistical technique for outlier detection. 
\begin{definition}[Chauvenet's Criterion] Given the \\ mean $\mu$ and the standard deviation $\sigma$ of the scores in $Score_{all}$, a score $x \in Score_{all}$ is an outlier if $Chauvenet(x) < c_1$, 
where
\[
Chauvenet(x) = p(\frac{\mu - x}{\sigma}) \times |Score_{all}|,
\]
$c_1$ is a confidence level\footnote{Typically, it is set to 0.5 when using Chauvenet's criterion.} and $p(\frac{\mu - x}{\sigma})$ is the probability\footnote{We assume a normal distribution.} of observing a data point that is $\frac{\mu - x}{\sigma}$ standard deviations away from the mean.  

According to Chauvenet's criterion, there are no outliers when $\sigma< c_2$, a second parameter that is typically set close to 0.\footnote{In the literature, $\sigma< 0.011$ is typically used.} 
\end{definition}
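A direct reading of this definition can be sketched as follows. We take $p$ to be the two-sided normal tail probability and compute the normal CDF via \texttt{math.erf}; the concrete function names are ours, not SERIMI's:

```python
import math
from statistics import mean, pstdev

def normal_tail_prob(z):
    """Two-sided probability of a point at least |z| std deviations from the mean."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def is_chauvenet_outlier(x, scores, c1=0.5):
    """Return True if x is an outlier in `scores` by Chauvenet's criterion."""
    mu, sigma = mean(scores), pstdev(scores)
    if sigma == 0:
        return False  # no outliers possible in a constant bag
    z = (mu - x) / sigma
    return normal_tail_prob(z) * len(scores) < c1
```

For instance, in a bag of nine scores of 1.0 and one score of 0.01, the low score lies three standard deviations below the mean and is flagged as an outlier, while 1.0 is not.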


Our procedure for threshold selection first extracts the maximum score of each candidate set $C(s) \in C(S)$ to form $Score_{max}$. 
When there are no outliers according to Chauvenet's criterion, i.e., the standard deviation of $Score_{max}$ is below $c_2$, we set the threshold to the minimum score in $Score_{max}$. 
Otherwise, we iteratively apply Chauvenet's criterion over $Score_{all}$ until no further outliers can be detected: in every iteration, if outliers are found and $\delta$ is the highest score among them, we remove all scores smaller than or equal to $\delta$ from $Score_{all}$; this pruned bag of scores is then used in the next iteration. The maximum $\delta$ found during this process is used as the threshold. Alg.\ \ref{alg:delta} describes this procedure.

\begin{algorithm}
\caption{ThresholdBasedSelection($C$).}
\begin{algorithmic}[1]
\scriptsize\tt 
\STATE  $Y \leftarrow getMaxScores(C)$ 
\STATE  $L \leftarrow getAllScores(C)$ 
\STATE  $\delta \leftarrow$ empty list 
\IF {$Y.standardDeviation < c_2$}
\RETURN $Y.min$
\ENDIF
\FORALL{$x \in L$}  
\IF {$L.mean - x < 0$}
\STATE continue
\ENDIF
\IF {$chauvenet(L, x)$}
\STATE  $C' \leftarrow$ remove all scores $\leq x$ from $C$ 
\STATE  $\delta.add(x)$
\STATE  $\delta.add(ThresholdBasedSelection(C'))$
\RETURN $\delta.max$
\ENDIF
\ENDFOR  
\RETURN 0
\end{algorithmic}
\label{alg:delta}
\end{algorithm}
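The procedure can be sketched in Python as follows, operating on flat score bags rather than SERIMI's candidate structures; we assume a normal distribution with the two-sided tail probability for $p$, and all helper names are ours. This is a minimal sketch under these assumptions, not SERIMI's implementation:

```python
import math
from statistics import mean, pstdev

def _tail_prob(z):
    # Two-sided normal tail probability of a point |z| std deviations from the mean.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def threshold_selection(score_max, score_all, c1=0.5, c2=0.011):
    """Select delta: min(Score_max) in the easy setting, otherwise
    iterative Chauvenet-based outlier pruning over Score_all."""
    if pstdev(score_max) < c2:        # easy setting: no outliers possible
        return min(score_max)
    bag = sorted(score_all)
    delta = 0.0
    while len(bag) > 1:
        mu, sigma = mean(bag), pstdev(bag)
        if sigma == 0:
            break
        # Below-the-mean scores flagged as outliers by Chauvenet's criterion.
        outliers = [x for x in bag
                    if x < mu and _tail_prob((mu - x) / sigma) * len(bag) < c1]
        if not outliers:
            break
        delta = max(delta, max(outliers))
        bag = [x for x in bag if x > delta]   # prune and iterate
    return delta
```

The exact outliers found depend on the probability model chosen for $p$ and on whether the population or sample standard deviation is used; the sketch uses the population standard deviation.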

For example, for the scores in Fig.\ \ref{fig:computation}, the bag of maximum scores $Score_{max}$=\{0.98, 0.23, 1.0\} has a standard deviation much higher than $c_2$; therefore, the iterative procedure is applied. Considering all scores $Score_{all}$= \{0.98, 0.5, 0.33, 0.07, 0.23, 0.12, 0.22, 1.0, 0.68, 0.24\}, the algorithm selects the threshold $\delta=0.68$; all instances with scores smaller than 0.68 are thus rejected as matches. Notice that 0.68 is much higher than 0.23, the minimum of $Score_{max}$.
