\documentclass{ba-kecs}
\usepackage{graphicx}
\usepackage{amsmath}

\title{Evaluating a Generalization of the Winkler Extension in the Context of Ontology Mapping}

\author{Maurice Hermans}

\begin{document}

\maketitle

\begin{abstract}
Knowledge systems are commonly based on ontologies, which can be heterogeneous, prohibiting the exchange of information between multiple systems. The task of matching takes as input two ontologies and determines as output the relationships between their concepts. Matching frameworks use, among other techniques, string similarities to determine corresponding concepts between two ontologies. This article evaluates several string similarities on the task of matching the concept names of ontologies. For the evaluation, numerous metrics as well as a newly proposed extension are applied to an ontology mapping dataset and a record matching dataset. The proposed extension is based on the Winkler extension, which increases the similarity between strings based on the presence of a common prefix; it generalizes this bonus to the longest common substring (LCS) present in a pair of strings. The experiments performed reveal that the proposed LCS extension performs better than the Winkler extension, and that, when applied to the Levenshtein similarity, the resulting function outperforms the evaluated non-hybrid functions.
\end{abstract}

\section{Introduction}\label{sec:introduction}
Modern knowledge systems make use of ontologies that are specifically created for the task for which the system is used. Since knowledge systems used in the same domain can be built according to different specifications and requirements, it is very likely that the ontologies utilized by these systems are heterogeneous. For instance, ontologies can differ with regard to their structure, granularity or scope. This makes it very challenging to exchange data between knowledge systems which do not use the same ontology. The demand for being able to exchange data led to an increasing interest in the task of matching these ontologies. Matching is a critical operation in many fields, such as the semantic web, schema/ontology integration, data warehouses and e-commerce. The problem of exchanging data originates from the field of databases, which utilize schemas to encode metadata. Present-day knowledge systems use ontologies, which have more capabilities, but the underlying idea is the same.

The task of matching takes as input two ontologies, each consisting of a set of concepts, and determines as output the relationships between those concepts. Multiple relationships are possible, e.g. equivalence and subsumption, but this article only deals with equivalence. To match two concepts there are numerous characteristics to consider which, taken together, determine whether the concepts match or not. One such characteristic is the name of a concept, which is exploited by string-based approaches. The task of matching entity names has been explored by a number of communities, including statistics, databases, and artificial intelligence. A matching system uses several similarity measures, each exploiting different ontology characteristics, in order to produce an alignment between ontologies.
\noindent This paper will examine the following research question:
\begin{center}
\textit{To what extent can string similarities, applied to concept names, be improved such that they are better suited for ontology mapping?}
\end{center}

This paper proposes a generalization of the Winkler extension. The paper evaluates the proposed extension as well as other string similarity methods using the dataset created by Cohen et al. \cite{Cohen:2000} and the conference dataset originating from the 2010 Ontology Alignment Evaluation Initiative (OAEI) \cite{OAEI-2010}.

The rest of this paper is structured as follows. Section 2 provides the reader with the necessary background information on this domain. Section 3 details the proposed extension of contemporary methods in this field. Section 4 presents the experiments performed and the results obtained. Section 5 discusses the results obtained in section 4 and proposes future research. Section 6 reports the conclusions of the research performed.

\section{Background information}
\subsection{Schemas and ontologies}
The use of schemas originates from the field of databases \cite{Abiteboul:1997}; they are used to encode metadata, which is very useful for retrieving relevant data from a database. Later on, ontologies were developed, which add more expressive ways to encode metadata. Both methods are widely used in knowledge systems. There are some important differences and commonalities between schemas and ontologies, as described by Shvaiko et al. \cite{Shvaiko:2004}, of which the key points are:
\begin{enumerate}
\item Database schemas often do not provide explicit semantics for their data. The semantics are usually specified explicitly at design-time and frequently do not become part of the database specification, and are therefore not available \cite{Noy:2004}. Ontologies are logical systems that themselves obey some formal semantics; for example, ontology definitions can be interpreted as a set of logical axioms.
\item Ontologies and schemas are similar in the sense that (i) they both provide a vocabulary of terms that describes a domain of interest and (ii) they both constrain the meaning of terms used in the vocabulary \cite{Guarino:2004, Uschold:2004}.
\item Schemas and ontologies are found in environments such as the Semantic Web, and quite often in practice, it is the case that they need to be matched.
\end{enumerate}
Ontology mapping frameworks provide knowledge systems with the capacity to exchange information with other knowledge systems which use different ontologies. However, before a framework can map ontologies, there are several aspects in which the ontologies have to be consistent, or it should be possible to transform the ontologies in such a way that these aspects become consistent. The aspects that have to be consistent are described by Euzenat et al. \cite{Euzenat:2001}.
\begin{enumerate}
\item Encoding: being able to segment the representation in characters.
\item Lexical: being able to segment the representation in words (or symbols).
\item Syntactic: being able to structure the representation in structured sentences (or formulas or assertions).
\item Semantic: being able to construct the propositional meaning of the representation.
\item Semiotic: being able to construct the pragmatic meaning of the representation (or its meaning in context).
\end{enumerate}
The focus of this paper will be on syntactic similarities, more specifically string similarities.
\subsection{Matching techniques categorization}
Ontology mapping frameworks exploit multiple ontology characteristics during the matching process, also described by Shvaiko et al. \cite{Shvaiko:2004}. Matching techniques can compare two ontology concepts by utilizing information which describe the concepts themselves, or by investigating other related concepts, thus also exploiting the structure of an ontology. Techniques which utilize the structure of the ontology can be categorized as follows:
\begin{enumerate}
\item Graph-based techniques are graph algorithms which consider ontologies as labelled graphs. The considered ontologies are viewed as graph-like structures containing concepts and their inter-relationships. The comparison of a pair of nodes, which represent a pair of concepts, is usually based on their positions within the graphs. The intuition behind this is that if two nodes are similar, their adjacent nodes might also be similar.
\item Taxonomy-based techniques are also graph algorithms which consider only the specialization relation. The intuition behind this is that \textit{is-a} links connect already similar terms, therefore their neighbouring nodes may also be somehow similar.
\item Repository-of-structures techniques store schemas/ontologies and their fragments together with pairwise similarities between them. When new structures are to be matched, they are first checked for similarity against the structures which are already available in the repository. The goal is to identify structures which are sufficiently similar to be worth matching in more detail, or to reuse already existing alignments.
\item Model-based algorithms handle the input based on its semantic interpretation (e.g., model-theoretic semantics). Thus, they are well grounded deductive methods.
\end{enumerate}
Matching techniques which do not use the structure of the ontologies can use different types of information about the concepts themselves. The techniques which use these different types of information can be divided into several categories:
\begin{enumerate}
\item String-based techniques are often used in order to match names and descriptions of schema/ontology concepts. These are the techniques this paper will focus on. These techniques consider strings as sequences of letters in an alphabet. They are typically based on the following intuition: two concepts can be similar if their names are similar. Section \ref{String similarities} will go into further details about the string matching techniques.
\item Language-based techniques consider names as words in some natural language (e.g., English). They are based on Natural Language Processing techniques exploiting morphological properties of the input words.
\item Constraint-based techniques are algorithms which deal with the internal constraints being applied to the definitions of entities, such as types, cardinality of attributes, and keys.
\item Linguistic resources such as common knowledge or domain specific thesauri are used in order to match words based on linguistic relations between them (e.g., synonyms, hyponyms). In this case names of schema/ontology entities are considered as words of a natural language.
\item Alignment reuse techniques represent an alternative way of exploiting external resources, which contain in this case alignments of previously matched schemas/ontologies.
\item Upper level formal ontologies can be also used as external sources of common knowledge. The key characteristic
of these ontologies is that they are logic-based systems, and therefore, matching techniques exploiting them can be based on the analysis of interpretations.
\end{enumerate}
The listed techniques all have strengths and weaknesses with regard to the different heterogeneities which can exist between two ontology concepts. For instance a technique which uses linguistic resources can easily detect synonymous concepts but will be unable to handle concepts whose names contain spelling errors. Thus a combination of different techniques will be required to cope with all types of heterogeneities.
\subsection{Problem statement}
The focus of this paper is string similarities, more specifically how they can be used to map concepts for ontology matching. Ontology mapping frameworks return as a result an alignment: a set of mapped concepts. The matching operation produces such an alignment by evaluating all concept pairs and returning only the mapped concept pairs which received the highest confidence of having the same meaning. In this paper these alignments are obtained by using only string similarities on the names of the concepts within the two ontologies being mapped. For this task an extension to existing similarities is proposed, and all the similarities are benchmarked on two datasets, one containing ontologies and the other containing record matching data. For example, consider two knowledge systems containing records from a population, each record containing characteristics such as name, address, age, and gender. The matching techniques this paper evaluates will then use string similarities on the names contained in the records to assign a score to all possible pairs of records. These values are then used to assess which pairs of records are considered equal and will therefore be included in the alignment.
\subsection{String similarities}\label{String similarities}
Distance functions map a pair of strings $ s $ and $ t $ to a real number $ r $, where smaller values indicate a higher similarity between $ s $ and $ t $. Similarity functions are the complement of distance functions: higher values of $ r $ indicate a higher similarity and thus a lower distance between the pair of strings. To avoid confusion, in this paper the value $ r $ is the one defined by similarity functions. The metrics used to determine string similarities can be split up into multiple categories depending on their underlying logic for comparing strings. First, there are metrics which look at the number of edit operations needed to transform one string into another, for example the \textit{Levenshtein similarity} \cite{Levenshtein:1966}. Then there are metrics which look at the number of matching characters in both strings, for example the \textit{Jaro similarity} \cite{Jaro:1989}. A well-known extension of this similarity is the \textit{Winkler extension} \cite{Winkler:1990}, which adds a favourable rating to pairs of strings which share a common prefix. Another category of metrics is token-based: strings are first split up into tokens, as for example in the \textit{Jaccard similarity} \cite{Jaccard:1908}. Finally, there are metrics which combine multiple similarities to assign scores to pairs of strings; these are called hybrid similarity functions.
\subsubsection{Levenshtein}
One important subclass of distance functions are \textit{edit-distance functions}, which use the number of edit operations required to convert string $ s $ into string $ t $. The considered operations are character insertion, deletion, and substitution. Each of these operations has a cost assigned to it; the costs can be static or trained. We consider the simple \textit{Levenshtein distance} \cite{Levenshtein:1966}, which assigns a unit cost to each of the edit operations, with the extension that the returned value is normalized by the length of the longest string. Let $ i $ range over the three considered operations (insertion, deletion and substitution), let $ c_{i} $ be the cost of performing operation $ i $ and $ x_{i} $ the number of times operation $ i $ is performed; the normalized \textit{Levenshtein distance} is then defined as:
\begin{equation}
Levenshtein(s,t) = \frac{1}{\max(\mid s\mid, \mid t\mid)}\sum_{i = 1}^{3} c_{i} \cdot x_{i}
\end{equation}
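As an illustration, the normalized edit distance above, converted to a similarity in $[0,1]$, can be sketched in Python as follows (a minimal sketch with unit costs; the function name is chosen here for illustration and is not from any reference implementation):

```python
def levenshtein_similarity(s: str, t: str) -> float:
    """Unit-cost edit distance, normalized by the length of the longer
    string and converted to a similarity in [0, 1]."""
    if not s and not t:
        return 1.0
    # Dynamic-programming table: d[i][j] is the edit distance between
    # the first i characters of s and the first j characters of t.
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        d[i][0] = i
    for j in range(len(t) + 1):
        d[0][j] = j
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 1.0 - d[len(s)][len(t)] / max(len(s), len(t))
```

For instance, "kitten" and "sitting" are three unit-cost operations apart, giving a similarity of $1 - 3/7$.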
\subsubsection{Jaro}
The \textit{Jaro similarity} \cite{Jaro:1989} is not based on edit operations but determines its similarity by looking at the number of matching characters between two strings and their relative positions. Given two strings $ s = a_{1},a_{2}\ldots a_{K} $ and $ t = b_{1},b_{2}\ldots b_{L} $, define a character $ a_{i} $ in $ s $ to be common with $ t $ when there is a $ b_{j} = a_{i} $ in $ t $ such that $ i - H \leq j \leq i + H $, where $ H = \frac{\min(\mid s\mid, \mid t\mid)}{2} $. Let $ s' = a'_{1},a'_{2}\ldots a'_{K'} $ be the characters in $ s $ which are common with $ t $ (in the same order as they appear in $ s $) and let $ t' = b'_{1},b'_{2}\ldots b'_{L'} $ be analogous; now define a \textit{transposition for} $ s',t' $ to be a position $ i $ such that $ a'_{i} \neq b'_{i} $. Let $ T_{s',t'} $ be half the number of transpositions for $ s' $ and $ t' $. The \textit{Jaro similarity} is defined as:
\begin{equation}
Jaro(s,t) = \frac{1}{3} \cdot \left(\frac{\mid s'\mid}{\mid s\mid}+\frac{\mid t'\mid}{\mid t\mid}+\frac{\mid s'\mid - T_{s',t'}}{\mid s'\mid}\right)
\end{equation}
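The character-matching procedure above can be sketched in Python as follows (an illustrative sketch following the definitions given here, with the window $ H $ taken as $ \lfloor \min(\mid s\mid,\mid t\mid)/2 \rfloor $; other formulations of Jaro use slightly different windows):

```python
def jaro(s: str, t: str) -> float:
    """Jaro similarity: characters are common when they match within a
    window of H positions; T is half the number of positions where the
    common sequences s' and t' disagree."""
    if not s or not t:
        return 0.0
    H = min(len(s), len(t)) // 2
    used = [False] * len(t)   # characters of t already matched
    s_common = []
    for i, a in enumerate(s):
        lo, hi = max(0, i - H), min(len(t), i + H + 1)
        for j in range(lo, hi):
            if not used[j] and t[j] == a:
                used[j] = True
                s_common.append(a)
                break
    if not s_common:
        return 0.0
    t_common = [b for j, b in enumerate(t) if used[j]]
    # T_{s',t'}: half the number of mismatched positions.
    transpositions = sum(a != b for a, b in zip(s_common, t_common)) / 2
    return (len(s_common) / len(s)
            + len(t_common) / len(t)
            + (len(s_common) - transpositions) / len(s_common)) / 3
```

On the classic example pair "MARTHA"/"MARHTA", all six characters are common and one transposition occurs, giving $ \frac{1}{3}(1 + 1 + \frac{5}{6}) = \frac{17}{18} $.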
\subsubsection{Jaro-Winkler}
A very well-known extension of the \textit{Jaro similarity} is the \textit{Winkler extension} \cite{Winkler:1990}. This extension uses the length of the longest common prefix of $ s $ and $ t $ to assign more favourable ratings to pairs of strings which share identical prefixes. The extension can be used in combination with any similarity but is most commonly used with the \textit{Jaro similarity}. Let $ P $ be the length of the longest common prefix and define $ P' = \min(P,4) $; the \textit{Jaro-Winkler similarity} is then defined as:
\begin{equation}
\begin{split}
Jaro\textit{-}Wink&ler(s,t) = \\
&Jaro(s,t) + \frac{P'}{10} \cdot (1-Jaro(s,t))
\end{split}
\end{equation}
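Since the Winkler bonus only depends on the common prefix and a precomputed base score, it can be sketched independently of the underlying similarity (an illustrative sketch; `base_sim` would typically be the Jaro score of the pair):

```python
def winkler_extension(s: str, t: str, base_sim: float) -> float:
    """Apply the Winkler bonus to a precomputed base similarity score:
    P' is the length of the common prefix, capped at four characters."""
    prefix_len = 0
    for a, b in zip(s, t):
        if a != b:
            break
        prefix_len += 1
    p = min(prefix_len, 4)
    return base_sim + (p / 10) * (1 - base_sim)
```

Because the bonus is a fraction of the remaining distance to 1, the result never exceeds 1 for a base similarity in $[0,1]$.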
\subsubsection{Jaccard}
The \textit{Jaccard similarity} is a token-based measure, which can be applied to strings that have been preprocessed into tokens. This preprocessing, called tokenization, is the process of demarcating and possibly classifying sections of a string of input characters. The strings to be compared are considered to be multisets of tokens. The \textit{Jaccard similarity} \cite{Jaccard:1908} between two token sets $ S $ and $ T $ is defined as:
\begin{equation}
Jaccard(S,T) = \frac{\mid S \cap T\mid}{\mid S \cup T\mid}
\end{equation}
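Treating the strings as multisets of whitespace-separated tokens, as described above, this can be sketched as follows (an illustrative sketch; the tokenization rule is an assumption, since any tokenizer could be used):

```python
from collections import Counter

def jaccard(s: str, t: str) -> float:
    """Jaccard similarity on whitespace tokens, with the strings viewed
    as multisets: & takes minimum counts, | takes maximum counts."""
    S, T = Counter(s.split()), Counter(t.split())
    if not S and not T:
        return 1.0
    intersection = sum((S & T).values())
    union = sum((S | T).values())
    return intersection / union
```

For example, "conference volume" and "conference document" share one of three distinct tokens, giving a similarity of $1/3$.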
\subsubsection{SoftTFIDF}
Some background information is required in order to fully detail the \textit{SoftTFIDF similarity}. The \textit{TFIDF} \cite{SparckJones:1988} weighting scheme, widely used in the information retrieval community, is a weighting scheme for document vectors with the purpose of increasing the relevance of retrieved documents. A \textit{cosine similarity} is applied to such vectors to obtain the most relevant documents. The \textit{TFIDF} measure depends, like the \textit{Jaccard similarity}, on the common elements between the two sets, which in this case are weighted. The weight assigned to a token $ w $ is larger when that token is rare in the collection of strings from which $ S $ and $ T $ are drawn. The \textit{TFIDF} similarity can be defined as:
\begin{equation*}
TFIDF(S,T) = \sum_{w\in  S\cap T} V(w,S)\cdot V(w,T)
\end{equation*}
Where $ TF_{w,S} $ is the frequency of token $ w $ in $ S $, $ N $ is the size of the corpus, $ IDF_{w} $ is the inverse of the fraction of names in the corpus that contain $ w $, $ V'(w,S) = \log(TF_{w,S} + 1) \cdot \log(IDF_{w}) $ and $ V(w,S) = V'(w,S) / \sqrt{\sum_{w'} V'(w',S)^{2}} $. The implementation by Cohen et al. \cite{Cohen:2003} measures all document frequency statistics from the complete corpus of names to be matched.
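This weighting scheme can be sketched in Python as follows (an illustrative sketch, not the implementation of Cohen et al.; it assumes, as in their setup, that document frequencies are measured over the full corpus of token lists, so every token of a compared pair occurs in the corpus):

```python
import math
from collections import Counter

def tfidf_vector(tokens, corpus):
    """V(w, S) as defined above: log-damped term frequency times the
    log of IDF_w = N / df_w, normalized to unit length."""
    n = len(corpus)
    tf = Counter(tokens)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))   # document frequency: one count per document
    v = {w: math.log(tf[w] + 1) * math.log(n / df[w]) for w in tf}
    norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
    return {w: x / norm for w, x in v.items()}

def tfidf_similarity(s_tokens, t_tokens, corpus):
    """Cosine similarity of the two TFIDF vectors: the sum over the
    common tokens of the products of their weights."""
    vs = tfidf_vector(s_tokens, corpus)
    vt = tfidf_vector(t_tokens, corpus)
    return sum(vs[w] * vt[w] for w in vs.keys() & vt.keys())
```

Since the vectors are unit-normalized, identical token lists yield a similarity of exactly 1.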

The SoftTFIDF similarity, proposed by Cohen et al. \cite{Cohen:2003}, extends the notion of $ S\cap T $ such that it includes tokens which are similar according to a secondary similarity function. Since it utilizes a secondary similarity function, denoted as $ sim' $, SoftTFIDF can be categorized as a \textit{hybrid similarity function}. Let $ CLOSE(\theta,S,T) $ be the set of tokens $ w \in S $ such that there is some $ v \in T $ with $ sim'(w,v) > \theta $, and for $ w \in CLOSE(\theta,S,T) $ let $ D(w,T) = \max_{v\in T} sim'(w,v) $; the SoftTFIDF similarity is then defined as:
\begin{equation}
\begin{split}
SoftTFIDF(S,T) = &\\
\sum_{w\in CLOSE(\theta,S,T)}& V(w,S) \cdot V(w,T) \cdot D(w,T)
\end{split}
\end{equation}
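The definition above can be sketched as follows (an illustrative sketch of the formula as stated, not Cohen et al.'s implementation; `V` is assumed to be a TFIDF weighting function as defined earlier and `sim2` plays the role of the secondary similarity $ sim' $, both passed in as parameters since their exact choice is configurable):

```python
def soft_tfidf(s_tokens, t_tokens, V, sim2, theta=0.9):
    """SoftTFIDF sketch: a token w of S contributes when some token v of
    T is closer than theta under the secondary similarity; each such w
    contributes its TFIDF weights scaled by D(w, T)."""
    score = 0.0
    for w in s_tokens:
        # D(w, T): the best secondary-similarity score of w against T.
        best = max((sim2(w, v) for v in t_tokens), default=0.0)
        if best > theta:   # w is in CLOSE(theta, S, T)
            score += V(w, s_tokens) * V(w, t_tokens) * best
    return score
```

With exact match as the secondary similarity, SoftTFIDF reduces to the plain TFIDF similarity, which is a useful sanity check.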

\section{Proposed extension}\label{Proposed extension}
The proposed extension is mainly focused on ontology mapping but will also be benchmarked on another dataset containing record matching data. The idea for this extension arose while studying datasets in the field of ontologies, since the concepts defined there are very likely to have a high similarity because of the intuition followed when naming the concepts. To clarify this, see figure \ref{fig:ontologies}, which shows a small part of two ontologies in the OAEI dataset.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{Ontologies.png}
\caption{Two partial ontologies from the OAEI dataset}
\label{fig:ontologies}
\end{figure}
These two ontologies are part of the matching task on the OAEI conference dataset. To a human inspecting these two ontologies it is immediately clear that \textit{Conference} and \textit{Conference\_volume} denote the same concept, that \textit{Document} is the same as \textit{Conference\_document}, and that \textit{PaperAbstract} is equal to \textit{Abstract}. The \textit{Winkler extension} only takes prefixes into account when comparing two strings, so of these three correct pairs it will only increase the rating of the \textit{Conference}-\textit{Conference\_volume} pair. This is of course a desired increase in rating for that pair; however, looking at the remaining two correct concept pairs there is a missed opportunity, since those share a common substring as well. More favourable would be an extension which increases the ratings of pairs with common prefixes as well as other common substrings, which leads to the proposition of this extension. Let $ sim $ denote the similarity used as the basis for the extension, let $ LCS $ be the length of the longest common substring, and let $ S $ be a scaling factor, detailed in section \ref{sec:scaling}, for the contribution of the LCS on top of the basis similarity rating; the LCS extension is then defined as:
\begin{equation}\label{eq:extension}
\begin{split}
LCS\textit{-}Exten&sion(s,t) = \\
&sim(s,t) + (LCS \cdot S) \cdot (1-sim(s,t))
\end{split}
\end{equation}
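A sketch of the proposed extension in Python (an illustrative sketch; the scaling factor used here is $ S = w/L $ with $ L $ the length of the shorter string, as detailed in section \ref{sec:scaling}, and the names are chosen for illustration):

```python
def lcs_length(s: str, t: str) -> int:
    """Length of the longest common (contiguous) substring, by dynamic
    programming: d[i][j] is the length of the common suffix of s[:i], t[:j]."""
    best = 0
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
                best = max(best, d[i][j])
    return best

def lcs_extension(s: str, t: str, base_sim: float, w: float = 0.8) -> float:
    """LCS extension of a precomputed base similarity: the bonus is a
    fraction of the remaining distance to 1, scaled by S = w / L so that
    a pair whose shorter string is wholly contained in the other
    receives the full bonus weight w."""
    if not s or not t:
        return base_sim
    scale = w / min(len(s), len(t))
    return base_sim + lcs_length(s, t) * scale * (1 - base_sim)
```

For a pair such as "abc"/"abcdef", where the whole shorter string is a substring of the longer one, the bonus reaches its maximum $ w \cdot (1 - sim(s,t)) $.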

The extension researched utilizes the length of the longest common substring and will be referred to as the LCS-Extension. Where the \textit{Winkler extension} is limited to common prefixes, this extension gives more favourable ratings to pairs of strings which share a long common substring, which includes prefixes as a special case. Unlike the \textit{Winkler extension}, which limits the bonus to a maximum of four characters, this extension does not limit the bonus derived from the length of the LCS. Since this is an extension to a similarity, it adds a more favourable rating to the rating of the primary similarity when a common substring is present in a pair of strings. Since this extension is a generalization of the \textit{Winkler extension}, the formula incorporating the bonus is based on the \textit{Winkler formula} as well. The major difference lies in how the scaling of the bonus is adapted, since this extension does not limit its bonus to a maximum of four characters; the intuition is that the scaling should depend mostly on the length of the shortest string being compared. The proposed extension will be evaluated using different similarities as a basis.

\section{Experiments}
\subsection{Datasets}\label{sec:datasets}
To compare the proposed extension with the other similarities discussed in section \ref{String similarities}, two datasets are used: the first contains ontologies and the second was created for evaluating record matching metrics. The ontology dataset is obtained from the OAEI-2010 conference track \cite{OAEI-2010}. This dataset is a collection of ontologies describing the domain of organising conferences. Results can be evaluated automatically against a reference alignment created by a domain expert. The record matching dataset is used to see whether the extension also works for data in which the intuition described in section \ref{Proposed extension} is not clearly present. This dataset was created by Cohen et al. \cite{Cohen:2003} to compare string distance metrics. It contains a suite of labelled entity-name matching problems; the fields from which the names originate vary from animal names to business names to a list of names of restaurants.
\subsection{Blocking methods}\label{sec:blocking}
When evaluating a similarity measure it is preferred to compute all pairwise similarities between two ontologies. This can, however, result in lists of pairs so large that their evaluation is not computationally feasible. Therefore it is desirable to pre-process the data using so-called blocking methods. For this research one of the blocking methods proposed by Cohen et al. \cite{Cohen:2003} has been applied, the n-gram blocker. A general example illustrates the intuition behind blocking: in statistical record linkage, it is common to group records by some variable which is known a priori to usually be the same for matching pairs. For example, when matching records containing address information it is common to only consider pairs which have the same zip code.

The datasets used in this paper consist only of pairs of strings to be matched, so there is little prior information to use for pre-processing. To block this data, knowledge-free approaches are needed to reduce the number of considered pairs. This can, however, result in correct matches being removed from the list of considered pairs. The first \textit{blocking method}, applied on the two sets $ A $ and $ B $, only considers the pairs of strings $ (s,t) \in A \times B $ such that $ s $ and $ t $ share some substring $ v $ which appears in at most a fraction $ f $ of all names; this method is called the token blocker. The second method considered for blocking the data uses n-grams: it only considers strings which share an n-gram. For both moderate-size datasets used for benchmarking, the n-gram blocker with $ n = 4 $ is used. On the record matching dataset the token blocker finds between 93.3\% and 100.0\% of the correct pairs, with an average of 98.9\%. The 4-gram blocker also finds between 93.3\% and 100.0\% of the correct pairs, with an average of 99.0\%.
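The n-gram blocker can be sketched as follows (an illustrative sketch of the idea, not the implementation used in the experiments; names are chosen for illustration):

```python
def ngram_blocker(A, B, n=4):
    """Knowledge-free blocking: keep only the pairs (s, t) in A x B that
    share at least one character n-gram, indexing B by its n-grams first."""
    def ngrams(s):
        return {s[i:i + n] for i in range(len(s) - n + 1)}
    # Inverted index: n-gram -> strings of B containing it.
    grams_b = {}
    for t in B:
        for g in ngrams(t):
            grams_b.setdefault(g, set()).add(t)
    pairs = set()
    for s in A:
        for g in ngrams(s):
            for t in grams_b.get(g, ()):
                pairs.add((s, t))
    return pairs
```

Only candidate pairs sharing a 4-gram survive, which prunes the $ \mid A \times B \mid $ search space while, as reported above, losing on average around 1\% of the correct pairs.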
% It can be the case that a similarity is used to detect duplicate entries in a list. However the data is already partitioned into two mutually exclusive lists which reduces the number of pairs to be considered.

\subsection{Evaluation}
The similarities will all be evaluated using \textit{precision-recall} values. These values are defined, in terms of true positives (tp), false positives (fp) and false negatives (fn), of a retrieved list with regard to a reference list, as follows:
\begin{equation*}
Precision = \frac{tp}{tp+fp}
\end{equation*}
\begin{equation*}
Recall = \frac{tp}{tp+fn}
\end{equation*}
Precision and recall are set-based measures, which evaluate the quality of an unordered set of retrieved ontology concepts according to its correctness and completeness. To evaluate ranked lists, precision can be plotted against recall after each retrieved concept. In the experiments performed in this paper, the individual concept precision values are interpolated to a set of standard recall levels (0 to 1 with increments of 0.1). The rule for obtaining the precision value at recall level $ i $ is to use the maximum precision obtained at any actual recall level greater than or equal to $ i $. Note that the non-interpolated precision is not defined at a recall level of $ i = 0 $, as opposed to the interpolated precision.
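The interpolation rule above can be sketched as follows (an illustrative sketch; the inputs are the precision and actual-recall values observed after each retrieved concept, in rank order):

```python
def interpolated_precision(precisions, recalls):
    """Interpolate precision to the standard recall levels 0.0, 0.1, ..., 1.0:
    at level i, take the maximum precision over all observed points whose
    actual recall is greater than or equal to i."""
    levels = [i / 10 for i in range(11)]
    return [max((p for p, r in zip(precisions, recalls) if r >= lvl),
                default=0.0)
            for lvl in levels]
```

For example, for observed points $(1.0, 0.33)$, $(0.5, 0.66)$ and $(0.4, 1.0)$, the interpolated precision at recall level 0.4 is 0.5, the maximum over the two points with recall at least 0.4.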
\subsubsection{Methodology}
Before any of the similarities are evaluated on the datasets, these are blocked by a 4-gram blocker. The reason, however, is not the computational complexity of the evaluation, described in section \ref{sec:blocking}, but the increase in the number of true positives with respect to the total number of matches.
Otherwise the precision values obtained at recall values of 0.2 and higher would already be lower than 0.1, which is too low to effectively compare the similarities. Note that this suggests that the \textit{blocking} of data has more effects than only reducing the computational complexity of the problem.

Once the datasets are selected they are all blocked, and then the similarity metrics are used to obtain a ranking of the concept pairs. For each individual task the precision versus recall values are calculated according to the method described in the previous section. The obtained individual results are then averaged into one precision versus recall graph.
\subsection{Results}
Multiple experiments were performed with the extension: the first is a sensitivity analysis of the scaling factor in the proposed extension, the second is an evaluation of the proposed extension versus the Winkler extension, and the last is a comparison of all the similarities detailed in section \ref{String similarities} with the proposed extension. All experiments were performed on the datasets discussed in section \ref{sec:datasets}.
\subsubsection{Scaling optimization}\label{sec:scaling}
This experiment determines how much effect the scaling of the LCS bonus has on the performance of the similarity. As explained in section \ref{Proposed extension}, the intuition for the scaling is that the longest common substring should contribute a large amount to the final score. However, the scaling cannot be omitted, because then the similarity would deem two strings equal whenever one string, in its whole, is a substring of the other. The proposed scaling, called $ S $ in equation \ref{eq:extension}, is $ \frac{1}{L} \cdot w $, where $ L $ is the length of the shortest string and $ w $ is the weight expressing how strongly the similarity evaluates two strings as being the same when the whole shortest string is a substring of the other string. The optimal weight $ w $ is obtained with the \textit{Jaro similarity} and the \textit{Levenshtein similarity} as basis. In the figures of this section, showing the effect of changing $ w $, values of $ w < 0.5 $ are omitted for readability of the graphs. These values were also tested but showed minimal difference from $ w = 0.5 $.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{JaroLCSonConference.png}
\caption{Jaro-LCS tested with different weights on the conference dataset}
\label{fig:Jaro-LCS on Conference}
\end{figure}
In figure \ref{fig:Jaro-LCS on Conference} it can be seen that the weight of 1, by assigning high bonuses, has a strong negative effect when evaluating pairs in which one entire name is a substring of the other. With a weight of 0.9 the bonus appears to impair the performance at recall values up to 0.6 as well; for recall values of 0.6 and higher it seems to perform slightly better than the lower weights. The difference in performance at low recall levels, however, is substantial compared to the improvement at higher recall levels, making slightly lower weights favourable overall.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{JaroLCSonCohen.png}
\caption{Jaro-LCS tested with different weights on the Cohen dataset}
\label{fig:Jaro-LCS on Cohen}
\end{figure}
The next comparison of weights is done on the record matching dataset of Cohen. As seen in figure \ref{fig:Jaro-LCS on Cohen}, the higher weights do benefit the \textit{Jaro-LCS similarity} on the Cohen dataset, whereas the weight of 1 does indeed negatively influence the performance, especially at the lower recall values. The weight of 1 does seem to recover at recall values of 0.6 and higher, but the weight of 0.9 performs better at lower recall values and almost equally at higher recall values.

Considering the results obtained from both the conference and the record matching dataset, all weights except 1 perform equally up to a recall level of 0.2. The overall best performing weights are 0.6 to 0.8, where the 0.6 weight performs slightly better at recall levels 0.4 up to 0.6, and the 0.7 and 0.8 weights perform slightly better at recall levels higher than 0.6.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{LevenshteinLCSonConference.png}
\caption{Levenshtein-LCS tested with different weights on the conference dataset}
\label{fig:Levenshtein-LCS on Conference}
\end{figure}
The next two comparisons concern the weight performances for the \textit{Levenshtein-LCS similarity}. The extension with \textit{Levenshtein} as basis shows the same tendencies as the \textit{Jaro} basis, as seen in figure \ref{fig:Levenshtein-LCS on Conference}: the weight of 1 performs poorly and the weight of 0.9 performs moderately over all recall levels, except at a recall level of 1, which is not really relevant because the produced alignment would be poor. The weight of 0.8 seems to follow the lower weights in performance, doing slightly worse at lower recall and slightly better at higher recall.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{LevenshteinLCSonCohen.png}
\caption{Levenshtein-LCS tested with different weights on the Cohen dataset}
\label{fig:Levenshtein-LCS on Cohen}
\end{figure}
For the record matching dataset, seen in figure \ref{fig:Levenshtein-LCS on Cohen}, the similarity again prefers higher weights, with the exception of a weight of 1, which favours the LCS bonus too much to benefit the \textit{Levenshtein similarity}. The weights of 0.8 and 0.9 perform best over all recall levels, with an 8\% better performance at a recall level of 0.8.

The results obtained on both datasets for the \textit{Levenshtein-LCS similarity} show the same tendencies as for the \textit{Jaro-LCS similarity}: the weights of 0.9 and 1 perform poorly on the conference dataset. This time, however, the weight of 1 also performs poorly on the Cohen dataset, and the weight of 0.9 outperforms the lower weights only slightly. Overall, the gain of the lower weights on the conference dataset is substantial compared to their loss on the Cohen dataset, making 0.8 the most favourable weight. In conclusion, since both similarities show the same tendency, a weight of 0.8 is most suited for incorporating the LCS bonus into a similarity.

\subsubsection{Comparison with the Winkler extension}
This experiment compares the \textit{Winkler extension} with the proposed \textit{LCS extension}. It will show whether the intuition behind the proposed extension leads to better performance than the more specific \textit{Winkler extension} on which it is based. Based on the results of the experiments in the previous section, the \textit{LCS extension} uses a weight of 0.8. Both extensions are applied to the \textit{Jaro} and \textit{Levenshtein} similarities to obtain a full overview of which of the two extensions performs best. Note that the \textit{Winkler extension} is most commonly used with the \textit{Jaro} similarity.
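To make the compared extensions concrete, the following is a minimal Python sketch of both, assuming the LCS bonus takes the same shape as Winkler's prefix bonus, with the common-prefix length replaced by the length of the longest common substring and scaled by the weight. The cap of 4 and the scaling factor $p = 0.1$ are carried over from Winkler's formulation and are assumptions here; the exact definitions appear earlier in this article.

```python
def lcs_length(s, t):
    """Length of the longest common substring of s and t (dynamic programming)."""
    best = 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def winkler_extension(base_sim, s, t, p=0.1, max_prefix=4):
    """Winkler extension: boost the base similarity by a bonus
    proportional to the common prefix length (capped at 4)."""
    sim = base_sim(s, t)
    prefix = 0
    for a, b in zip(s, t):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return sim + prefix * p * (1.0 - sim)

def lcs_extension(base_sim, s, t, weight=0.8, p=0.1, max_len=4):
    """Hypothetical LCS extension: the same bonus shape, but based on the
    longest common substring anywhere in the strings, scaled by `weight`."""
    sim = base_sim(s, t)
    l = min(lcs_length(s, t), max_len)
    return sim + weight * l * p * (1.0 - sim)
```

For a pair such as \texttt{intlconference} and \texttt{conference} the common prefix is empty, so only the LCS variant grants a bonus, which is the intuition behind the generalization.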

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{WinklerVsLCSonConference.png}
\caption{Comparison of the Winkler extension vs the LCS extension on the conference dataset}
\label{fig:WinklerVsLcs on Conference}
\end{figure}
The first comparison, seen in figure \ref{fig:WinklerVsLcs on Conference}, is done on the conference dataset. In the recall interval from 0 to 0.4 there is minimal difference in the performance of the similarities, with none showing an advantage. In the recall interval from 0.4 to 0.6 the difference in performance increases, with a slight dip of the \textit{Jaro-LCS} similarity at a recall of 0.5 where the other similarities again perform similarly. At recall values of 0.6 and higher the proposed extension displays superior performance compared to the \textit{Winkler extension}, especially when comparing the metrics by their basis similarity.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{WinklerVsLCSonCohen.png}
\caption{Comparison of the Winkler extension vs the LCS extension on the Cohen dataset}
\label{fig:WinklerVsLcs on Cohen}
\end{figure}
When comparing the similarities on the record matching dataset, see figure \ref{fig:WinklerVsLcs on Cohen}, the proposed \textit{LCS extension} outperforms both \textit{Winkler extension} based metrics by a substantial margin. The \textit{Levenshtein-Winkler} similarity performs worse from a recall of 0.1 onward, whereas the \textit{Jaro-Winkler} performs almost similarly up to a recall of 0.2. At the remaining recall values the \textit{LCS extension} outperforms the \textit{Winkler extension} by a significant margin, peaking at recall values of 0.8 and 0.9 with more than 0.1 higher precision.

\subsubsection{Comparison with other established measures}
In this experiment, the best performing configuration of the proposed extension is compared to all similarities discussed in section \ref{String similarities}. The \textit{LCS extension} is used with the \textit{Levenshtein similarity} as basis, since that combination performed best in the previous experiment.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{AllComparisonOnConference.png}
\caption{Comparison of all similarity measures on the Conference dataset}
\label{fig:All on Conference}
\end{figure}
Looking at the comparison on the conference dataset, see figure \ref{fig:All on Conference}, the token-based \textit{Jaccard similarity} performs poorly on the recall interval from 0.2 to 1. The \textit{SoftTFIDF similarity}, a hybrid function, does slightly worse than the edit-based distances at lower recall values and recovers at higher recall values. The edit-based distances all display a similar performance curve, with some of them performing strictly better. The \textit{Levenshtein-LCS} performs best at recall values up to 0.5, whereas at the higher recall values the \textit{SoftTFIDF similarity} performs best.

\begin{figure}[h]
\centering
\includegraphics[width=8.7cm]{AllComparisonOnCohen.png}
\caption{Comparison of all similarity measures on the Cohen dataset}
\label{fig:All on Cohen}
\end{figure}
The comparison shown in figure \ref{fig:All on Cohen} is obtained by comparing all discussed similarities on the record matching dataset. It is evident that on this dataset the token-based similarities outperform most of the edit-based distance functions, especially at the lower recall values. The \textit{SoftTFIDF similarity} fully utilizes the combined strength of both types of similarities and outperforms all other similarities by a substantial margin. The \textit{Jaccard similarity} outperforms all tested edit-based functions at recall values up to 0.3. For recall values of 0.3 and higher the \textit{Levenshtein-LCS} outperforms all the similarities except \textit{SoftTFIDF}.
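As a reference point for the token-based side of this comparison, the \textit{Jaccard similarity} can be sketched in a few lines; the whitespace tokenization and lowercasing here are assumptions:

```python
def jaccard(s, t):
    """Token-based Jaccard similarity: the size of the token
    intersection divided by the size of the token union."""
    a, b = set(s.lower().split()), set(t.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

A pair such as \texttt{"Conference Paper"} and \texttt{"paper review"} shares one of three distinct tokens, giving a similarity of $1/3$; this illustrates why token-based measures benefit from the multi-token records of the Cohen dataset.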

\section{Discussion}
The datasets used for this research had to be preprocessed with an n-gram blocker to obtain reasonable results. This can benefit some similarities more than others, making the benchmark somewhat biased. The conference dataset also includes entries with exactly identical names which denote concepts with different meanings. String metrics are unable to overcome this type of heterogeneity; however, since all string similarities are affected equally by it, it does not prohibit the comparison of the different metrics.
\subsection{Future research}
The proposed extension currently uses a bonus score function based on the Winkler extension; further research could analyse whether there are better equations for incorporating the LCS bonus. Further research could also investigate which basis functions perform best in combination with the proposed extension.


Monge and Elkan \cite{Monge:1996} proposed a general scheme for a hybrid function that combines token-based aspects with a secondary similarity function; it can serve as a basis for further evaluation of the proposed extension. They proposed the following recursive matching scheme for comparing two strings $ s $ and $ t $:
\begin{equation*}
sim(s,t) = \frac{1}{K} \sum\limits_{i=1}^{K} \max\limits_{j=1}^{L} sim'(a_{i},b_{j})
\end{equation*}
First $ s $ and $ t $ are tokenized into substrings $ s = a_{1} \ldots a_{K} $ and $ t = b_{1} \ldots b_{L} $, and a secondary distance function $ sim' $ is defined. Following Monge and Elkan, this is called a \textit{level two distance function}. The proposed extension could be evaluated using this scheme as a level two distance function and compared to the other similarities.
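The scheme above translates directly into code; a minimal sketch, assuming whitespace tokenization and taking $ sim' $ as a parameter:

```python
def monge_elkan(s, t, sim2):
    """Monge-Elkan scheme: for each token of s, take the best score
    against any token of t under the level two function sim2, then
    average over the K tokens of s."""
    a, b = s.split(), t.split()
    if not a or not b:
        return 0.0
    return sum(max(sim2(ai, bj) for bj in b) for ai in a) / len(a)
```

The proposed \textit{Levenshtein-LCS similarity} would be passed in as \texttt{sim2}; the exact-match function used below is only a placeholder to illustrate the averaging behaviour.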

\section{Conclusion}
The results of the first experiment show that the scaling of the LCS bonus indeed favours the higher weights, with the exception of 0.9 and 1. This means that the LCS bonus can contribute substantially on top of the score of the basis similarity. The best results were obtained by extending the \textit{Levenshtein similarity}. The comparison between the \textit{Winkler extension} and the proposed extension shows that the \textit{LCS extension} gives strictly better results: the difference is minimal at lower recall values and increases to over 0.1 in precision at higher recall values. The better performance, however, did not present itself on the conference dataset, which was the basis for the intuition, but rather on the dataset of Cohen. In the comparison between all the well-known similarities, the \textit{Levenshtein-LCS} function performed best overall among the non-hybrid functions; only the hybrid \textit{SoftTFIDF} function performed better.

Revisiting the research question from section \ref{sec:introduction}, the performed experiments revealed that string similarities can be used to map a significant proportion of corresponding ontology concepts. While there is room for improvement, for instance by applying the proposed extension, string similarities cannot handle all the heterogeneities which can occur, for example concepts which have identical names but denote different things.

\bibliography{references}

\end{document}