\documentclass{llncs}

\usepackage{times}
\usepackage{latexsym}
\usepackage{url}
\usepackage{amssymb,amsfonts,amsmath}
\usepackage{graphicx}


\usepackage{algorithm}
\usepackage{algorithmic}

\newcommand\T{\rule{0pt}{2.6ex}}
\newcommand\B{\rule[-2.0ex]{0pt}{0pt}}

% ontology and components
\def\ONT{{\mathcal O}} % Ontology
\def\TBOX{{\mathcal T}} % T-Box
\def\ABOX{{\mathcal A}} % A-Box

\newcommand{\argmax}{\operatornamewithlimits{argmax}}

% correspondences and mapping
\def\MAP{{\mathcal A}} % Mapping
% \def\IMAP{{\mathcal I}} % Identity Mapping
\def\MAPALL{{\mathbb A}} % set of all mappings
\def\COR#1#2#3{\left\langle #1, #2, #3\right\rangle}

% specific
\def\SUBSE#1#2{subs_1(#1, #2)}
\def\SUBSZ#1#2{subs_2(#1, #2)}
\def\DISE#1#2{dis_1(#1, #2)}
\def\DISZ#1#2{dis_2(#1, #2)}
% general
\def\DIS#1#2#3{dis_#1(#2, #3)}
\def\SUBS#1#2#3{sub_#1(#2, #3)}
\def\SUBSxD#1#2#3{sub^{d}_#1(#2, #3)}
\def\SUBSxR#1#2#3{sub^{r}_#1(#2, #3)}
\def\SUPSxD#1#2#3{sup^{d}_#1(#2, #3)}
\def\SUPSxR#1#2#3{sup^{r}_#1(#2, #3)}
\def\DISxD#1#2#3{dis^{d}_#1(#2, #3)}
\def\DISxR#1#2#3{dis^{r}_#1(#2, #3)}
\def\MAPC#1#2{m_{c}(#1, #2)}
\def\MAPP#1#2{m_{p}(#1, #2)}
% instances
\def\DISASSC#1#2#3{disAssC_{#1}(#2, #3)}
\def\DISASSP#1#2#3#4#5{disAssP_{#1}(#2, #3, #4, #5)}
\def\MAPI#1#2{m_{i}(#1, #2)}


\def\jan#1{\textcolor{darkgreen}{[#1 (Jan)]}}
\def\mathias#1{\textcolor{darkblue}{[#1 (Mathias)]}}


%\def\jan#1{}
%\def\mathias#1{}

\begin{document}

\title{CODI: Combinatorial Optimization for Data Integration -- Results for OAEI 2011}

\author{Jakob Huber \and Timo Sztyler \and Jan Noessner \and Christian Meilicke}

\institute{KR \& KM Research Group\\University of Mannheim, Germany\\
\email{\{jahuber, tsztyler\}@mail.uni-mannheim.de}\\
\email{\{jan, christian\}@informatik.uni-mannheim.de}
}

\date{\today}
\maketitle

\begin{abstract}
%The problem of linking entities in heterogeneous and decentralized data repositories is the driving force behind the data and knowledge integration effort.
In this paper, we describe our probabilistic-logical alignment system CODI (Combinatorial Optimization for Data Integration). The system provides a declarative framework for the alignment of individuals, concepts, and properties of two heterogeneous ontologies. CODI leverages both logical schema information and lexical similarity measures with a well-defined semantics for A-Box and T-Box matching. The alignments are computed by solving corresponding combinatorial optimization problems. 
\end{abstract}

\section{Presentation of the system}

\subsection{State, purpose, general statement}

\looseness=-1 CODI (\textbf{C}ombinatorial \textbf{O}ptimization for \textbf{D}ata \textbf{I}ntegration) leverages terminological structure for ontology matching. The current implementation produces mappings between concepts, properties, and individuals. 
The system combines lexical similarity measures with schema information to completely avoid \textit{incoherence} and \textit{inconsistency} during the alignment process. In 2011, CODI participates for the second time in an OAEI campaign. We therefore put a special focus on the differences to the previous 2010 version of CODI.

\subsection{Specific techniques used}\label{sec:techniques}

\looseness=-1CODI is based on the syntax and semantics of Markov logic~\cite{domingos:2008} and transforms the alignment problem into a maximum-a-posteriori optimization problem. This problem requires an a-priori confidence value for each matching hypothesis as input. Therefore, we implemented a method for aggregating different similarity measures. Another new feature of CODI is the recognition of ontology pairs that are different versions of the same ontology. For instance matching, CODI does not compute lexical similarities for all pairs of instances but utilizes object-property assertions to reduce the number of necessary comparisons.

\subsubsection{Markov Logic Framework}

\looseness=-1Markov logic combines first-order logic and undirected probabilistic graphical models~\cite{RD:2006}. A Markov logic network (MLN) is a set of first-order formulae with weights. Intuitively, the more evidence there is that a formula is true, the higher the weight of this formula. Markov logic has been proposed as a possible approach to several problems occurring in the context of the semantic web~\cite{domingos:2008}. 
We have shown that Markov logic provides a suitable framework for ontology matching as it captures both  \emph{hard} logical axioms and \emph{soft} uncertain statements about potential correspondences between entities. The probabilistic-logical framework we propose for ontology matching essentially adapts the syntax and semantics of Markov logic. However, we always \emph{type} predicates and we require a strict distinction between \emph{hard} and \emph{soft} formulae as well as \emph{hidden} and \emph{observable} predicates. 
Given a set of constants (the classes and object properties of the ontologies), a set of formulae (the axioms holding between the objects and classes), and confidence values for correspondences, a Markov logic network defines a probability distribution over possible alignments. We refer the reader to \cite{niepert2010probabilistic,niepert2010uai} for an in-depth discussion of the approach and some computational challenges. For generating the Markov logic networks we use the approach described in \cite{riedel:08}. Our OAEI paper from last year contains a more technical description of the framework~\cite{noessner2010codi}.

%Given two ontologies $\ONT_1$ and $\ONT_2$ and an initial a-priori similarity measure $\sigma$ we apply the following formalization. First, we introduce observable predicates $O$ to model the structure of $\ONT_1$ and $\ONT_2$ with respect to both concepts and properties. For the sake of simplicity we use uppercase letters $D,E,R$ to refer to individual concepts and properties in the ontologies and lowercase letters $d,e,r$ to refer to the corresponding constants in $C$. In particular, we add ground atoms of observable predicates to $\mathcal{F}^h$ for $i \in \{1,2\}$ according to the following rules\footnote{Due to space considerations the list is incomplete. For instance, predicates modeling range restrictions are not included.}:
%\small
%\begin{align*}
%\ONT_i \models D \sqsubseteq E  & \mapsto \SUBS{i}{d}{e} \\
%\ONT_i \models D \sqsubseteq \neg E              & \mapsto \DIS{i}{d}{e} \\
%\ONT_i \models \exists R.\top \sqsubseteq D & \mapsto  \SUBSxD{i}{r}{d} \\
%\ONT_i \models \exists R^{-1}.\top \sqsubseteq D & \mapsto & \SUBSxR{i}{r}{d} \\
%\ONT_i \models \exists R.\top \sqsupseteq D & \mapsto  \SUPSxD{i}{r}{d} \\
%\ONT_i \models \exists R^{-1}.\top \sqsupseteq D & \mapsto & \SUPSxR{i}{r}{d} \\
%\ONT_i \models \exists R.\top \sqsubseteq \neg D  & \mapsto  \DISxD{i}{r}{d}
% \ONT_i \models \exists R^{-1}.\top \sqsubseteq \neg D & \mapsto & \DISxR{i}{r}{d}
%\end{align*}
%\normalsize
%The ground atoms of observable predicates are added to the set of hard constraints $\mathcal{F}^h$, forcing them to hold in computed alignments. The hidden predicates $m_c$ and $m_p$, on the other hand, model the sought-after concept and property correspondences, respectively. Given the state of the observable predicates, we are interested in determining the state of the hidden predicates that maximize the a-posteriori probability of the corresponding possible world. The ground atoms of these hidden predicates are assigned the weights specified by the a-priori similarity $\sigma$.  The higher this value for a correspondence the more likely the correspondence is correct \emph{a-priori}. Hence, the following ground formulae are added to $\mathcal{F}^s$:
%\small
%\begin{align*}
%(\MAPC{c}{d}, \ \ \sigma(C, D)) & & \mbox{ if C and D are concepts}  \\
%(\MAPP{p}{r}, \ \ \sigma(P, R)) & & \mbox{ if P and R are properties}
%\end{align*}
%\normalsize
%Notice that the distinction between $m_c$ and $m_p$ is required since we use typed predicates and distinguish between the \emph{concept} and \emph{property} type.

\paragraph{Cardinality Constraints}
\looseness=-1A method often applied in real-world scenarios is the selection of a functional one-to-one alignment~\cite{cruz09selection}. Within the ML framework, we can include a set of hard cardinality constraints, restricting the alignment to be functional and one-to-one. 

\paragraph{Coherence Constraints}
\looseness=-1Incoherence occurs when axioms in ontologies lead to logical contradictions. Clearly, it is desirable to avoid incoherence during the alignment process. All existing approaches that focus on alignment coherence remove correspondences after the alignment has been computed. Within the ML framework, we can incorporate incoherence-reducing constraints \emph{during} the alignment process. 
%This is accomplished by adding formulae of the following type to $\mathcal{F}^h$.
%\small
%\begin{align*}
%\DIS{1}{x}{x'} \wedge \SUBS{2}{x}{x'} \Rightarrow \neg (\MAPC{x}{y} \wedge  \MAPC{x'}{y'}) \\
%\DISxD{1}{x}{x'} \wedge \SUBSxD{2}{y}{y'} \Rightarrow \neg (\MAPP{x}{y} \wedge \MAPC{x'}{y'})
%\end{align*}
%\normalsize
%The second formula, for example, has the following purpose. Given properties $X,Y$ and concepts $X',Y'$. Suppose that $\ONT_1 \models \exists X.\top \sqsubseteq \neg X'$ and $\ONT_2 \models \exists Y.\top \sqsubseteq Y'$. Now, if $\COR{X}{Y}{\equiv}$ and  $\COR{X'}{Y'}{\equiv}$ were both part of an alignment the merged ontology would entail both $\exists X.\top \sqsubseteq X'$ and $\exists X.\top \sqsubseteq \neg X'$ and, therefore, $\exists X.\top \sqsubseteq \bot$. The specified formula prevents this type of incoherence. 

\paragraph{Stability Constraints}
\looseness=-1Several approaches to ontology matching propagate alignment evidence derived from structural relationships between concepts and properties. These methods leverage the fact that existing evidence for the equivalence of concepts $C$ and $D$ also makes it more likely that, for example, child concepts of $C$ and $D$ are equivalent. One such approach to evidence propagation is \emph{similarity flooding}~\cite{melnik02simflood}. Conversely, the general notion of stability was introduced, expressing that an alignment should not introduce new structural knowledge~\cite{meilicke07extraction}. 
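The interplay between soft confidence values and hard cardinality and coherence constraints can be illustrated with a small exhaustive MAP sketch. The correspondences, confidences, and incoherent pairs below are hypothetical toy data; the actual system solves the problem via the ILP-based grounding of Markov logic networks cited above, not by enumeration.

```python
from itertools import chain, combinations

# Candidate correspondences with a-priori confidences (hypothetical toy data).
candidates = {
    ("Person", "Human"): 0.9,
    ("Person", "Agent"): 0.6,
    ("Paper", "Article"): 0.8,
}
# Hard coherence constraint: pairs of correspondences that would render the
# merged ontology incoherent (hypothetical; derived from disjointness axioms).
incoherent_pairs = {(("Person", "Human"), ("Paper", "Article"))}

def violates(alignment):
    # One-to-one cardinality constraint: no entity may be matched twice.
    src = [c for c, _ in alignment]
    tgt = [d for _, d in alignment]
    if len(set(src)) < len(src) or len(set(tgt)) < len(tgt):
        return True
    # Coherence constraint: no forbidden pair may be jointly selected.
    for a, b in incoherent_pairs:
        if a in alignment and b in alignment:
            return True
    return False

def map_alignment(candidates):
    # Exhaustive MAP inference: select the subset of correspondences with
    # maximal total confidence that satisfies all hard constraints.
    corrs = list(candidates)
    subsets = chain.from_iterable(
        combinations(corrs, r) for r in range(len(corrs) + 1))
    best = max((s for s in subsets if not violates(s)),
               key=lambda s: sum(candidates[c] for c in s))
    return set(best)
```

Note how the coherence constraint forces the solver to trade the high-confidence correspondence $\langle Person, Human\rangle$ against the jointly higher-scoring pair of the remaining two correspondences.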

%\looseness=-1The presented list of cardinality, coherence, and stability constraints could be extended by additional soft and hard formulae. Other constraints could, for example, model known correct correspondences or generalize the one-to-one alignment to m-to-n alignments. 


\subsubsection{Combination of Different Similarity Measures}

\looseness=-1Compared to last year we improved our lexical string similarity measures significantly. In a first step we collect and standardize all string information of the entities, such as IDs, labels, and annotations. During the standardization process we split tokens into separate words if necessary (e.g., \emph{hasAuthor} is transformed to \emph{has} \emph{Author}), replace special characters with spaces, and remove words like \emph{a} or \emph{the} according to a stop-word list. 
%First of all we collect and standardize all string information from the entities. During the standardization process each string is analyzed, so the program replace some symbols with a space and try to split up one string to several words. In the last step we replace single words which hasn't any impressive meaning.\\
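The standardization step can be sketched as follows. The stop-word list and function name are hypothetical; the actual implementation may differ in detail.

```python
import re

STOP_WORDS = {"a", "an", "the", "of"}  # hypothetical stop-word list

def standardize(label):
    # Split camelCase tokens into separate words, e.g. "hasAuthor" -> "has Author".
    label = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label)
    # Replace special characters (underscores, hyphens, ...) with spaces.
    label = re.sub(r"[^A-Za-z0-9]+", " ", label)
    # Lowercase and drop stop words.
    words = [w.lower() for w in label.split() if w.lower() not in STOP_WORDS]
    return " ".join(words)
```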

\looseness=-1Furthermore, the functionality for computing string similarities has been improved. CODI is able to combine several string similarity measures by taking the average, the maximum, or a weighted combination with a specific predefined weight per measure. These weights could also be learned with machine learning algorithms. In the standard configuration CODI combines the Cosine, Levenshtein, Jaro-Winkler, Smith-Waterman-Gotoh, overlap coefficient, and Jaccard similarity measures\footnote{Implemented in \url{http://sourceforge.net/projects/simmetrics/}.} with specific weights.
%In the main part we compare each possbile string pair with several different similarity measures. We use the following measures Cosine, Levensthein, Jaro Winkler, Simth Waterman Goto, Overlap coeffiecient and Jaccard. Each measure have an indivual weight, that allows as to get an combined confidence value of each string pair. It is also possible to obtain the maximum value of all measures.\\
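The combination scheme can be sketched as below. For a self-contained example we use two stand-in measures from the standard library (word-set Jaccard and an edit-distance-based ratio) with hypothetical weights; CODI itself uses the SimMetrics implementations listed above.

```python
from difflib import SequenceMatcher

def jaccard(a, b):
    # Jaccard similarity on word sets.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def edit_ratio(a, b):
    # Edit-distance-based ratio (stand-in for Levenshtein similarity).
    return SequenceMatcher(None, a, b).ratio()

MEASURES = [(jaccard, 0.4), (edit_ratio, 0.6)]  # hypothetical weights

def combined_sim(a, b, mode="weighted"):
    scores = [(m(a, b), w) for m, w in MEASURES]
    if mode == "max":
        return max(s for s, _ in scores)
    if mode == "average":
        return sum(s for s, _ in scores) / len(scores)
    return sum(s * w for s, w in scores)  # weighted combination
```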


\subsubsection{Matching different Ontology Versions}

\looseness=-1A specific task in ontology matching is the alignment of different versions of the same ontology. The test cases of the benchmark track are an example of this kind of task. In the following we argue (a) that matching versions requires a different approach than a standard matching task, and (b) that it is therefore necessary to detect automatically whether two ontologies are different versions of the same ontology.

\looseness=-1\textbf{(a)} Suppose that $\ONT$ and $\ONT'$ are versions of the same ontology. Further, let $\ONT$ contain fewer concepts and properties than $\ONT'$. Then it is highly probable that many or nearly all entities in $\ONT$ have a counterpart in $\ONT'$. A good one-to-one alignment will thus contain as many correspondences as there are entities in $\ONT$. Based on this assumption it makes sense to lower the threshold or to use a structural measure in addition to the computation of string-based similarities. In particular, we apply the following measure.

\looseness=-1We first calculate the number of subclasses $\#sub$, superclasses $\#sup$, disjoint classes $\#dis$, and domain- and range-restrictions ($\#dom$ and $\#ran$) for a specific concept $C$. These counts are then used to calculate a similarity. For example, given $C \in \ONT$ and $D \in \ONT'$ we have $sim_{\#sub}(C, D)= \frac{1+\min(\#sub(C), \#sub(D))}{1+\max(\#sub(C),\#sub(D))}$. The overall similarity $sim(C, D)$ is then computed as the weighted average over the similarity values for each of $\#sub$, $\#sup$, $\#dis$, $\#dom$, $\#ran$. 
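A minimal sketch of this structural measure, assuming hypothetical feature weights (the paper does not specify the actual weighting):

```python
def feature_sim(n1, n2):
    # sim = (1 + min) / (1 + max) for one structural feature count.
    return (1 + min(n1, n2)) / (1 + max(n1, n2))

# Hypothetical weights for #sub, #sup, #dis, #dom, #ran (must sum to 1).
WEIGHTS = {"sub": 0.3, "sup": 0.3, "dis": 0.2, "dom": 0.1, "ran": 0.1}

def structural_sim(counts_c, counts_d):
    # Weighted average of per-feature similarities; counts_c and counts_d
    # map a feature name to its count for concepts C and D, respectively.
    return sum(w * feature_sim(counts_c[f], counts_d[f])
               for f, w in WEIGHTS.items())
```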

\looseness=-1The resulting similarity measure is highly imprecise, but has a high recall if we apply it to two ontologies with high structural similarity. Whenever there is a high probability that the two input ontologies are versions of the same ontology, we add for each concept $C$ the top-$k$ counterparts $D$ with respect to $sim(C, D)$ as matching hypotheses with low confidence to the optimization problem (the same is done for properties). This approach may sound quite drastic, but keep in mind that there are anchor correspondences generated by our string-based measures as well as constraints that interact and result in a meaningful final alignment. %Unfortunately, the computational complexity of the Markov logic problem would become untraceable when we would just give every correspondence pair a minimum value. Thus, we pre-select promising correspondences with our structural measure.

\looseness=-1\textbf{(b)} In order to determine whether two ontologies are versions of each other, we apply the Hungarian method to the input generated by our structural measure. The Hungarian method finds an optimal one-to-one alignment $\MAP_{opt}$. Now suppose that we match an ontology against itself. The number of correspondences in $\MAP_{opt}$ is then equal to the number of entities in the ontology, i.e., $\MAP_{opt}$ has full coverage. Moreover, the sum of confidences $\sum_{c \in \MAP_{opt}} conf(c)$ will be $|\MAP_{opt}|$. In general, we assume that $\sum_{c \in \MAP_{opt}} conf(c)$ divided by the size of the smaller ontology is close to $1$ for versions of the same ontology. In particular, we treat a pair of ontologies as versions if the measured value is above $0.9$. 
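The version-detection criterion can be sketched as follows. For brevity the sketch assumes a square similarity matrix and finds the optimal assignment by brute force; the actual system uses the Hungarian method, which scales polynomially.

```python
from itertools import permutations

def optimal_one_to_one(sim):
    # Optimal one-to-one alignment over a similarity matrix. Brute force over
    # permutations is enough for this sketch (the Hungarian method would be
    # used in practice for larger inputs).
    n = len(sim)
    best = max(permutations(range(n)),
               key=lambda p: sum(sim[i][p[i]] for i in range(n)))
    return [(i, best[i]) for i in range(n)]

def looks_like_versions(sim, threshold=0.9):
    # Treat two ontologies as versions of the same ontology if the average
    # confidence of the optimal alignment exceeds the threshold.
    alignment = optimal_one_to_one(sim)
    total = sum(sim[i][j] for i, j in alignment)
    return total / len(alignment) > threshold
```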

%In case we know that two ontologies have the same origin, we assume that we miss correspondences, if we just apply lexical similarities (and the CODI results of last years OAEI benchmark track support this). Consequently, we also need to consider entity pairs which have a low lexical similarity. Unfortunately, the computational complexity of the Markov logic problem would become untraceable when we would just give every correspondence pair a minimum value. Thus, we pre-select promising correspondences with our structural measure.


\subsubsection{Instance Matching}

\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{instanceGraph.eps}
\caption{Process of Selecting Individuals for Computing their Lexical Similarities with $thres=0.7$.}
\label{graph} 
\end{figure}

\looseness=-1In real-world instance matching tasks we are often faced with data sources containing a large number of instances. Computing the lexical similarity for every pair of instances is therefore not feasible. We implemented an approach which utilizes object properties to determine the instance pairs for which the similarity should be computed. Our approach assumes one common TBox and two different ABoxes; consequently, we assume that both TBoxes have been integrated beforehand.  

\looseness=-1In a first step we compute \emph{anchor} alignments. To this end, we compare a small subset of all individuals with each other (e.g., all individuals asserted to a specific concept such as $Film$), compute their lexical similarities $lexSim$, and add those pairs to the anchor alignments whose similarity is above a threshold $thres$. Then we take the first anchor alignment $a$. For all individuals connected via an object-property assertion to one of the individuals in $a$, we again compute the lexical similarity $lexSim$ and add the pair to the \emph{end} of the anchor alignments if $lexSim$ exceeds the threshold $thres$. Figure \ref{graph} visualizes this process. The anchor alignments form a unique set, i.e., only new pairs are added. We repeat this procedure for the second, third, and all following anchor alignments until the whole set has been processed.
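This expansion of anchor alignments along object-property assertions amounts to a breadth-first traversal and can be sketched as below (identifiers and the graph encoding are hypothetical):

```python
from collections import deque

def expand_anchors(anchors, neighbors1, neighbors2, lex_sim, thres=0.7):
    # Starting from anchor alignments, compare only those individuals that are
    # connected via object-property assertions to already aligned individuals.
    # neighbors1/neighbors2 map an individual to its property-connected
    # neighbors in ABox 1 and ABox 2, respectively.
    queue = deque(anchors)
    aligned = set(anchors)  # unique set: only new pairs are added
    while queue:
        i1, i2 = queue.popleft()
        for n1 in neighbors1.get(i1, []):
            for n2 in neighbors2.get(i2, []):
                pair = (n1, n2)
                if pair not in aligned and lex_sim(n1, n2) > thres:
                    aligned.add(pair)
                    queue.append(pair)  # append to the end of the queue
    return aligned
```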


\looseness=-1The lexical similarity $lexSim$ is computed as described in \cite{noessner2010codi}. However, we integrated coherence checks as proposed by \cite{noessner2010leveraging} in order to avoid inconsistent alignments. Comparisons can be further reduced by omitting individual pairs that have no asserted or inferred concept in common.

\looseness=-1This basic idea is extended by some post-processing steps. To catch correspondences that are not connected via object-property assertions, we compare all remaining individuals that do not yet occur in the anchor alignments and add them if their lexical similarity $lexSim$ is above $thres$. Finally, a greedy algorithm computes a one-to-one alignment. 

\looseness=-1These techniques reduce the runtime significantly on large instance-matching benchmarks. 

\subsection{Adaptations made for the evaluation}

\looseness=-1Prior to each matching task, CODI automatically analyzes the input ontologies and adapts itself to the matching task. The first distinction is based on the use of OBO constructs: if they are present, CODI automatically switches to a setting optimized for matching biomedical terms. The main difference in this setting is a similarity measure which exploits the fact that in medical domains the order of words is often transposed. The measure splits the two strings into two sets of words and computes the size of the largest common subset relative to the smaller set. 
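A minimal sketch of this word-set measure (the tokenization details are assumptions, and the examples are hypothetical biomedical labels):

```python
def word_set_sim(s1, s2):
    # Largest common word subset relative to the smaller word set; robust to
    # transposed word order, e.g. "fallopian tube" vs. "tube, fallopian".
    w1 = set(s1.lower().replace(",", " ").split())
    w2 = set(s2.lower().replace(",", " ").split())
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / min(len(w1), len(w2))
```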

\looseness=-1If this is not the case, CODI checks whether the ontologies might be versions of the same ontology. This test does not always discriminate correctly; sometimes we do not detect that two ontologies are different versions of the same ontology, which results in poor performance on some of the benchmark test cases.

\subsection{Link to the System}

\looseness=-1CODI can be downloaded from the SEALS portal via \url{http://www.seals-project.eu/tool-services/browse-tools}.

\subsection{Link to the Set of Provided Alignments}

\looseness=-1The alignments for the tracks \emph{Benchmark}, \emph{Conference}, and \emph{Anatomy} have been created on top of the SEALS platform.
For \emph{IIMB} the alignments can be found at \url{http://code.google.com/p/codi-matcher/downloads/list}.

\section{Results}

%\looseness=-1In the following section, we present the results of the CODI system for the individual OAEI tracks. 

\subsubsection{Benchmark Track}
\looseness=-1The benchmark track is constructed by applying controlled transformations to one source ontology. Thus, all test cases consist of different versions of the same ontology. However, our \emph{adaptive} method for detecting such ontologies categorizes only about 50\% of the pairs as being different versions of each other. In particular, our algorithm fails when the semantic structure is heavily changed (e.g., by deleting the class hierarchy). Nevertheless, with our adaptive method we were able to improve our $F_1$ score from 0.51 to 0.75 compared to last year. Had all test cases been \emph{correctly} categorized as different versions, CODI's $F_1$ score would have been 0.83, which is 32\% higher than last year. For the newly introduced dataset 2, our adaptive setting even produces a slightly higher $F_1$ score of 0.70 compared to the correct assignments. Thus, the structure of some test cases differs so much that it is beneficial to consider them \emph{not} as versions of the same ontology (even if they are). The results are shown in Table \ref{tab:benchmark}.

\begin{table}[ht] 
\vspace{-5mm}
\caption{Benchmark results}
\centering
\begin{tabular}{p{2cm}|p{2cm}|p{2cm}|p{1.8cm}||p{2cm}|p{1.8cm}|}
\hline
				  &\multicolumn{3}{c||}{Dataset 1}&\multicolumn{2}{c|}{Dataset 2}\\
          &\multicolumn{2}{c|}{2011}&2010&\multicolumn{2}{c|}{2011}\\
          &adaptive &correct &     & adaptive & correct\\
\hline
Precision   &0.88 & 0.90 & 0.72 & 0.86  & 0.80\\
Recall      &0.65 & 0.77 & 0.44 & 0.59  & 0.61\\ 
$F_1$ score &0.75 & 0.83 & 0.51 & 0.70  & 0.69\\
\hline
\end{tabular} 
\label{tab:benchmark} 
\vspace{-10mm}
\end{table}


\subsubsection{Conference Track}
\looseness=-1Since the conference dataset contains many trivial correspondences, matchers can easily reach a high precision. The challenge of this dataset consists in finding the non-trivial correspondences. Concentrating on these non-trivial correspondences, we were able to increase our recall from 0.51 to 0.61 compared to last year's results and gained 2\% additional $F_1$ score. In the conference track CODI correctly detected that none of the ontology pairs are versions of the same ontology. Consequently, the adaptive and the correctly assigned results are identical (see Table~\ref{tab:conference}). We also ran experiments in which we matched the conference ontologies with the fixed version setting and observed a significant loss in precision. This illustrates the importance of an adaptive approach.


\begin{table}[ht] 
\vspace{-5mm}
\caption{Conference results} 
\centering
\begin{tabular}{p{2cm}|p{2cm}|p{2cm}|p{2cm}|}
\hline
&\multicolumn{2}{c|}{2011}&2010\\
          \hline
          &adaptive &correct &  \\
\hline
Precision &0.75 &0.75&0.87\\
Recall    &0.61 &0.61&0.51\\
$F_1$ score &0.66 &0.66&0.64\\
\hline 
\end{tabular} 
\label{tab:conference} 
\vspace{-10mm}
\end{table}

\subsubsection{Anatomy Track}
\looseness=-1Due to our special lexical similarity measure for medical ontologies, we were able to improve our $F_1$ score from 0.794 last year to 0.879. Our results are better than those of the best participating system of the OAEI 2010. CODI requires approximately 35 minutes to finish this matching task on a 2.3 GHz dual-core machine with 8 GB of RAM.
\begin{table}[ht] 
\vspace{-5mm}
\caption{Anatomy results} 
\centering 
\begin{tabular}{p{3cm}|p{2cm}|p{2cm}|} 
\hline
          &2011  & 2010\\
\hline
Precision &0.955 &0.954\\
Recall    &0.815 &0.680\\ 
$F_1$ score &0.879 &0.794\\
\hline 
\end{tabular} 
\vspace{-10mm}
\label{tab:anatomy} 
\end{table}

\subsubsection{IIMB Track}

\looseness=-1The IIMB benchmark is created by applying lexical, semantic, and structural transformation techniques to real data extracted from Freebase \cite{iimb_iswc_2011}. The transformations are divided into four categories containing 20 transformations each. The size of the IIMB track has increased heavily compared to last year: each of the 80 transformations consists of ontology files larger than 20 MB. When computing even a very basic string similarity for every pair of individuals, the runtime explodes to over one hour per test case. With our new instance matching method, which only compares related individuals, we were able to reduce the runtime to 34 minutes per test case on average. This runtime includes consistency checking, computing a functional one-to-one alignment, and calculating a more sophisticated lexical similarity. 

\looseness=-1Besides the increase in size, the transformations have been made much harder. Thus, comparisons to last year's results are not meaningful. Table \ref{tab:iimb} summarizes the results of the CODI system for each of the four transformation categories\footnote{In several test cases all supplementary information for individuals has been deleted. These test cases will not be considered in the official OAEI evaluation and are thus omitted here.}.

\begin{table}[ht] 
\vspace{-5mm}
\caption{IIMB results} 
\centering 
\begin{tabular}{p{2.5cm}|p{1.8cm}p{1.8cm}p{1.8cm}p{1.8cm}|p{1.8cm}|} 
\hline
Transformations & 0-20  & 21-40 & 41-60 & 61-80 & overall \\
\hline
Precision & 0.93  & 0.83& 0.73 & 0.66  & 0.79 \\
Recall    & 0.78  & 0.59& 0.67 & 0.28  & 0.63 \\ 
$F_1$ score & 0.84& 0.68& 0.64 & 0.36  & 0.66 \\
\hline
\end{tabular} 
\vspace{-10mm}
\label{tab:iimb} 
\end{table}

\section{General comments}

%\subsection{Comments on the results}

\subsection{Discussions on the way to improve the proposed system}

\looseness=-1Improvements in usability by designing a suitable user interface are future steps that have to be taken. Although we focused this year on the implementation and evaluation of a combination of more sophisticated lexical similarity measures, we think that we have not yet exploited CODI's full potential in this respect. Last but not least, improvements in matching different ontology versions will be the subject of next year's participation.

\subsection{Comments on the OAEI 2011 procedure}
\looseness=-1The SEALS evaluation campaign is very beneficial since, for the first time, the matchers are publicly available for download and implement a common interface.

%\subsection{Comments on the OAEI 2010 test cases}
%The overall quality of the test cases is good. 

\subsection{Comments on the OAEI 2011 measures}
\looseness=-1We encourage the organizers to use semantic precision and recall measures as described in \cite{fleischhacker2010practical}. 

\section{Conclusion}

\looseness=-1This year we improved the lexical similarity measures and developed a methodology for automatically choosing between different settings. Combining these improvements with our Markov logic system from last year, we were able to improve our results for the anatomy, conference, and benchmark tracks significantly. Furthermore, we developed a new instance matching algorithm which only computes the similarity of promising instance pairs. With this technique we were able to reduce the runtime on the large instance matching benchmark significantly. 

The strength of the CODI system is the combination of lexical and structural information and the declarative nature that allows easy experimentation.  We will continue the development of the CODI system and hope that our approach inspires other researchers to leverage terminological structure and logical reasoning for ontology matching.

\bibliographystyle{abbrv}
\bibliography{CODI}  

\end{document}
