\section{Method}
\label{sec:method}
To match instances, we start with one instance $I$ for which we want to find a matching instance, i.e.\ an instance describing the same entity. Our method largely relies on string similarities, which we determine using various similarity measures that compare instances and properties as well as their values, as described in Section \ref{ssec:string similarity}. Based on $I$, we retrieve matching candidates, which we then compare to $I$ using both a simple and a lightweight graph-based method. Potential matches are then presented to the user. This process is shown in Fig.\ \ref{fig:process}.

\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{1.pdf}
\caption{The process of finding matching candidates for an instance}
\label{fig:process}
\end{figure}

\subsection{Loading Instances}
\label{ssec:loading instances}
Different data sources may provide their data in different formats, e.g.\ one source may return query results as N-Triples while another serializes its data as JSON. When loading instances from different data sources, the first important step is therefore to make them comparable. We do so by creating a generic data structure into which we can parse information from an arbitrary source; instances are then comparable using the methods described in the following. The underlying structure of linked data sources is shown in Fig.\ \ref{fig:instance}. An instance has two types of properties: data properties and object properties. Data properties hold literal values that describe the instance in more detail, such as strings or dates. Object properties link an instance to other instances, i.e.\ they may either specify the class(es) of an instance or its relations to other instances.
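Such a generic structure can be sketched as a small container type. The following is a minimal sketch; the field and class names are illustrative assumptions, not those of our implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    """Generic container for an instance parsed from an arbitrary source."""
    uri: str
    # data properties: property name -> literal values (strings, dates, ...)
    data_properties: dict = field(default_factory=dict)
    # object properties: property name -> URIs of linked instances or classes
    object_properties: dict = field(default_factory=dict)

# Example: a small instance with one data property and one object property
i = Instance(
    uri="http://example.org/resource/Berlin",
    data_properties={"label": ["Berlin"]},
    object_properties={"type": ["http://example.org/ontology/City"]},
)
```

Once N-Triples or JSON results are parsed into this shared shape, all later comparison steps can ignore the original serialization format.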

\begin{figure}[H]
\centering
\includegraphics[width=0.485\textwidth]{3.pdf}
\caption{An example of the structure of instances}
\label{fig:instance}
\end{figure}

\subsection{Finding Matching Candidates}
\label{ssec:finding matching candidates}
The second step of our method is to find other instances, called matching candidates ($MC$s), from other knowledge bases that might describe the same entity as the original instance $I$. For this, the available information about $I$ has to be used. To retrieve only a limited set of $MC$s, we take only the most specific values of $I$ into account. To extract these values from the information about $I$, we defined a set of properties which point to certain values such as the ID, the label, or a short description of $I$. The collected set of values, $NS(I)$, which essentially contains readable names of the entity, is the basis for retrieving the $MC$s. Each value in $NS(I)$ is used to retrieve $MC$s from another knowledge base. To find instances, we make use of the capabilities of the respective endpoints, which offer some kind of ``contains'' method for strings. At this point the $MC$s are only represented by their unique URIs. Based on these URIs, which contain the ID or the name of an instance, it is possible to assign to each $MC$ a value representing the preliminary probability that it describes the same entity as $I$. This step is reasonable because it is a fundamental convention in linked data that the URI contains the name of the entity \cite{LinkedData}. This pre-ordering is essential for the deeper analysis, as it is not always possible to examine all $MC$s in detail.
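The collection of $NS(I)$ and the URI-based pre-ordering can be sketched as follows. The set of name properties is a hypothetical example, and `SequenceMatcher` is a simple stand-in for the combined string similarity described in Section \ref{ssec:string similarity}:

```python
from difflib import SequenceMatcher

# Hypothetical set of properties that point to readable names (ID, label, ...)
NAME_PROPERTIES = {"id", "label", "name", "description"}

def name_set(properties):
    """Collect NS(I): the readable names found under the designated properties."""
    return {v for p, values in properties.items()
            if p in NAME_PROPERTIES for v in values}

def preliminary_score(candidate_uri, names):
    """Preliminary probability for a candidate: compare the last URI segment
    (which by convention contains the entity name) against each name."""
    local_part = candidate_uri.rstrip("/").rsplit("/", 1)[-1]
    return max(SequenceMatcher(None, local_part.lower(), n.lower()).ratio()
               for n in names)

ns = name_set({"label": ["Berlin"], "population": ["3645000"]})
score = preliminary_score("http://example.org/resource/Berlin", ns)
```

Candidates are then sorted by this preliminary score before the more expensive comparison steps are applied.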

\subsection{Comparing Instances}
\label{ssec:comparing instances}
The comparison of instances comprises all steps necessary to reduce the set of $MC$s and to assign a similarity value to each of them. The result is thus a limited, ordered set of $MC$s which can be returned to the user. To compute the similarity values, we propose a \textit{simple matching}, which only uses the directly connected properties and values, and an additional, slightly more sophisticated \textit{lightweight graph-based matching} method (Fig.\ \ref{fig:process}). The goal is to compute a reliable and meaningful similarity value for each matching candidate.\par

The matching candidates are scored and reduced in multiple steps; in each step, every $MC$ is compared to $I$, starting with the one that has the highest similarity.
The \textit{simple matching} step implicitly limits the number of $MC$s to twenty by stopping once a similarity value has been computed for twenty $MC$s. This is reasonable because applying additional procedures (e.g.\ \textit{graph-based matching}) to all $MC$s is computationally expensive, and it is unlikely that $MC$s initially ranked below the twentieth position contain the correct instance.\par

The similarity between instances is based on their property-value pairs, because these describe the entity. The computed similarity combines two values, $nameSim$ and $propSim$ (Fig.\ \ref{fig:similarity}). The $nameSim$ similarity takes only the readable names of an entity into account; these are extracted from an entity as explained in the previous subsection. The similarity between these two name sets is the highest string similarity between any pair of strings (e.g.\ name, label). The $propSim$ similarity uses the similarity of property-value pairs between two entities by aligning the properties as well as the values. The similarity value of one property-value pair is the geometric mean of the property similarity ($pSim$) and the value similarity ($vSim$) (Fig.\ \ref{fig:similarity}).
To find the pairs, we compare the properties of $I$ with those of a $MC$, and $I$'s values with those of the $MC$. This yields the value $pSim$ for property names and $vSim$ for values, respectively. The side (property or value) that is matched first has to have a relatively high similarity, while the other side may have a lower one: for example, if we compute $pSim$ first, it must exceed 0.5, while $vSim$ only has to exceed 0.3 for the pair to be considered in the list of possible property-value pairs. To compute $vSim$, we do not differentiate between object and data values; everything is treated as a string, which is possible by taking only the last part of an object's URI into account. After all similar combinations are detected, the best ones are selected under the condition that each property is matched to at most one property.\par
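The alignment of property-value pairs can be sketched as a greedy one-to-one matching under the two thresholds above. The similarity function is a simple stand-in for our combined string measures, and the greedy selection is one plausible realization of "the best ones are selected":

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Stand-in string similarity (the paper combines several measures)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align_pairs(props_i, props_mc, p_min=0.5, v_min=0.3):
    """Collect candidate pairs with pSim above 0.5 and vSim above 0.3,
    then greedily keep the best match, each property used at most once."""
    candidates = []
    for p1, v1 in props_i.items():
        for p2, v2 in props_mc.items():
            p_sim, v_sim = sim(p1, p2), sim(str(v1), str(v2))
            if p_sim > p_min and v_sim > v_min:
                candidates.append((p_sim * v_sim, p1, p2, p_sim, v_sim))
    candidates.sort(reverse=True)          # best combinations first
    used_i, used_mc, pairs = set(), set(), []
    for _, p1, p2, p_sim, v_sim in candidates:
        if p1 not in used_i and p2 not in used_mc:
            used_i.add(p1)
            used_mc.add(p2)
            pairs.append((p_sim, v_sim))
    return pairs
```

Each returned $(pSim, vSim)$ pair then contributes its geometric mean to $propSim$.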

The final $propSim$ value is the mean of all detected property-value pairs:
\begin{align}
propSim = \frac{1}{\left|P\right|} \sum_{x \in P}\sqrt{pSim(x) \cdot vSim(x)}
\end{align}
where $P$ is the set of all matched property-value pairs.
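Expressed in code, this is the mean of the per-pair geometric means; a minimal sketch:

```python
from math import sqrt

def prop_sim(pairs):
    """propSim: mean of the geometric means of all matched (pSim, vSim) pairs.

    `pairs` is a list of (pSim, vSim) tuples for the matched
    property-value pairs; an empty alignment yields 0.0.
    """
    if not pairs:
        return 0.0
    return sum(sqrt(p * v) for p, v in pairs) / len(pairs)
```

For example, the pairs $(1.0, 1.0)$ and $(0.64, 0.25)$ yield $(1.0 + 0.4)/2 = 0.7$.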

After computing $nameSim$ and $propSim$, we combine them into the $similarity$ of the \textit{simple matching} approach as a weighted average in which $propSim$ is weighted twice:

\begin{align}
similarity=\frac{nameSim+2\cdot propSim}{3}
\end{align}

We assume that $propSim$ is based on at least three property-value pairs, which may, for example, be the name, a label, and the type. We consider it reasonable to require that identical entities share this number of identical properties. If fewer than three property-value pairs can be found for a $MC$, the $similarity$ value is decreased by a penalty; conversely, a bonus is added if more than three pairs are detected.\par
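The weighted average together with the penalty/bonus rule can be sketched as follows. The adjustment size of 0.02 per pair is an illustrative assumption; the paper only specifies that a penalty or bonus is applied around the expected three pairs:

```python
def combined_similarity(name_sim, prop_sim_value, n_pairs,
                        expected_pairs=3, adjust=0.02):
    """Weighted average with propSim counted twice, then a penalty
    (bonus) per pair below (above) the expected three pairs.
    `adjust` is an assumed step size, clamped to [0, 1]."""
    s = (name_sim + 2 * prop_sim_value) / 3
    s += adjust * (n_pairs - expected_pairs)
    return max(0.0, min(1.0, s))
```

With exactly three matched pairs the formula reduces to the plain weighted average.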

\begin{figure}[H]
\centering
\includegraphics[width=0.485\textwidth]{2.pdf}
\caption{Computation of the similarity between two instances. The left part indicates that two sets of readable names are compared. The right illustrates that the properties and values are compared separately. Both parts are combined into one similarity.}
\label{fig:similarity}
\end{figure}

At this point, the top twenty (or fewer) $MC$s have a newly computed similarity based on the directly connected properties and their values (\textit{simple matching}), which is therefore much more reliable than the preliminary similarity. This list is still too large to present directly to users or to apply further algorithms to. Thus, it is necessary to cut down the list of $MC$s further. We do so by discarding all $MC$s whose similarity is not in the interval $[h-0.3,h]$, where $h$ is the highest similarity value occurring in the set of $MC$s.
For example, if the similarity of the highest-rated $MC$ is 0.9, we keep all $MC$s with a similarity of at least 0.6.\par
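This cut-off can be sketched in a few lines; `scored` is assumed to be a list of (candidate, similarity) tuples:

```python
def filter_candidates(scored, width=0.3):
    """Keep only candidates whose similarity lies in [h - width, h],
    where h is the highest similarity in the set."""
    if not scored:
        return []
    h = max(s for _, s in scored)
    return [(mc, s) for mc, s in scored if s >= h - width]
```

With a top similarity of 0.9, a candidate at 0.7 survives while one at 0.55 is discarded.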

At this point, the results are already quite stable and the set of matching candidates is limited to an acceptable number. Nevertheless, it is possible to corroborate them by taking further connected instances and the graph structure of linked data into account. This is the purpose of our \textit{lightweight graph-based matching} method, which follows the \textit{simple matching} approach. The previously matched property-value pairs are still available and are used to identify similar connected entities. For each pair, it is necessary to check whether the property points to another entity for both $I$ and the $MC$, because a data value cannot be compared to an object. If the matched property points to an object for both $I$ and the $MC$, the connected instances can be loaded and compared. The comparison procedure for the two retrieved entities is exactly the same as in the previous step (\textit{simple matching}). If the two objects are similar, the confidence of the original $MC$ is increased. The actual bonus depends on the average similarity of the matchable objects and is limited to 0.05. This is sensible because the evaluated matching candidates should already have a similarity of around 0.9 based on \textit{simple matching}, and the directly comparable property-value pairs are more important than the indirectly connected ones.
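The bonus computation can be sketched as follows. Scaling the cap by the average similarity is one plausible reading of "depends on the average similarity ... and is limited to 0.05"; the exact function is an assumption:

```python
def graph_bonus(object_pair_sims, cap=0.05):
    """Bonus from indirectly connected instances: scaled by the average
    similarity of the matchable objects and capped at 0.05.
    `object_pair_sims` holds the simple-matching similarities of the
    connected-object pairs; no comparable objects means no bonus."""
    if not object_pair_sims:
        return 0.0
    avg = sum(object_pair_sims) / len(object_pair_sims)
    return min(cap, cap * avg)
```

The small cap keeps the directly compared property-value pairs dominant in the final confidence.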

\subsection{String Similarity}
\label{ssec:string similarity}
We make use of string similarity measures at various points in our algorithms, as we mainly determine similarity based on the similarity of two strings. For this we use a combination of different measures, namely Cosine Similarity \cite{tan2007introduction}, Jaccard Similarity \cite{jaccard1901distribution}, Jaro-Winkler \cite{winkler1990string}, Levenshtein \cite{levenshtein1966bcc}, and the Overlap Coefficient \cite{overlapCoefficient}. Each of these measures has different strengths and weaknesses, which we balance by weighting the results of the individual measures. The weights for each measure were established in earlier work in the context of ontologies \cite{huber2011codi}. The resulting similarity value lies in the range from zero to one and can also be interpreted as the probability that the two strings are similar.
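To illustrate the weighted combination, the following sketch implements two of the five measures (normalized Levenshtein and Jaccard over character bigrams) and blends them with equal weights; the actual system uses all five measures with the weights from \cite{huber2011codi}:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def lev_sim(a, b):
    """Levenshtein distance normalized to a [0, 1] similarity."""
    m = max(len(a), len(b))
    return 1.0 if m == 0 else 1.0 - levenshtein(a, b) / m

def jaccard_sim(a, b):
    """Jaccard similarity over character bigrams."""
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    if not A and not B:
        return 1.0
    return len(A & B) / len(A | B)

def string_similarity(a, b, weights=(0.5, 0.5)):
    """Weighted combination of the individual measures (illustrative weights)."""
    w_lev, w_jac = weights
    return w_lev * lev_sim(a, b) + w_jac * jaccard_sim(a, b)
```

Identical strings score 1.0 under every component, so the weighted result stays in $[0, 1]$ as long as the weights sum to one.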
