\section{Ontological Smoothing}
\label{s:map}

Mapping between ontologies has been studied extensively by
several different communities, often using
different terminology, \eg, schema mapping~\cite{Rahm01asurvey} in
data management and ontology mapping in the semantic
web~\cite{mitranj05}. While the literature is too vast for a
comprehensive survey, the next subsection provides the context for
our work. Then, the remaining subsections describe our approach.

\subsection{Context and Previous Work}
\label{s:ontomap}

Dhamankar et al.~\cite{Dhamankar04imap:discovering} define schema
{\em matching} to be the first step in the process of constructing a
{\em mapping}, \ie\ a function converting descriptions of objects in
one ontology into corresponding descriptions in another. We consider
ontologies comprised of {\em types} (unary relations, also known as
concepts, organized in a taxonomy) and binary {\em relations}.
Relations may connect two types (\eg, {\em Parent})
 or may link a type to a primitive value, such as numbers, dates and
strings (\eg, {\em BirthDate}), which are often called {\em
attributes} or {\em properties}. Each type is associated with a set
of instances, called {\em entities}.

A {\em mapping} from a background ontology \B{\cal O} onto a target \T{\cal O}
is a set of partial functions whose ranges are entities, types and relations
in \T{\cal O}. Ullman~\cite{ullman-icdt97} noted that these mappings can be
thought of as view definitions, \eg\ defined using SQL operations such as
selection, projection, join and union. We adopt this perspective as shown in
Example~\ref{e:coach}.

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.2in]{figs/relatedwork}
\end{center}
\caption{Classification of selected ontology matching systems, based on \cite{euzenat2007b}.}
\label{fig:examplealgorithms}
\end{figure}

Euzenat \&\ Shvaiko~\cite{euzenat2007b} and Rahm
\&\ Bernstein~\cite{Rahm01asurvey} classify approaches to ontology
matching along several dimensions. The input of the matching algorithm can
be {\em schema-based}, {\em instance-based} or {\em mixed}. The output can
be an {\em alignment} (\ie, a one-to-one function between objects in the
two ontologies) or a {\em complex mapping} (\eg, defined as a view).
Figure~\ref{fig:examplealgorithms} plots some previous methods along these
dimensions.

The majority of existing systems focus on the alignment problem.
Doan \etal~\cite{doan-www02} present GLUE, which casts alignment of
two taxonomies into classification and uses learning techniques.
The more recent system by Wick \& McCallum~\cite{wick-kdd08} applies
a learning approach to a single probabilistic model that considers
all matching decisions jointly.
While these systems operate on instances, others align schemas:
Cupid~\cite{Madhavan01genericschema} matches tree structures in
three phases: linguistic matching, structural matching, and
aggregation.
COMA++~\cite{Aumueller05schemaand} enables parallel composition of
matching algorithms. Niepert \etal~\cite{niepert-aaai10} propose a
joint probabilistic model based on Markov logic.
QOM~\cite{Ehrig04qom} matches both instances and schemas, and can
trade off efficiency against quality.

Far less work has looked at finding complex mappings between ontologies.
Artemis~\cite{castano:artemis:} creates global views using hierarchical
clustering of database schema elements. MapOnto~\cite{An06discoveringthe}
produces mapping rules between two schemas expressed as Horn clauses.
Miller \etal's tool Clio~\cite{miller-vldb00,Miller01theclio}
generates complex SQL queries as mappings, and ranks these
by heuristics.

For ontological smoothing to work, it is essential that one can find
complex mappings involving selections, projections, joins, and unions.
While MapOnto and Clio handle complex mappings, they are semi-automatic tools
that depend on user guidance. In contrast, we designed \sys\ to be
fully autonomous. Unlike the other two, \sys\ uses a probabilistic
representation and performs joint inference to find the best mapping.


\subsection{Ontologies and Views}

We assume that an ontology is defined in terms of unary types, $t$,
and binary relations, $r$. We denote the selectional preference
(type constraint) of a relation by writing $r(t_1,t_2)$. For
example, {\tt isCoachedBy(athlete,coach)} is a relation in the Nell
ontology~\cite{carlson-aaai10}. We assume that each target relation
comes with a set of seed instances $\set{\ldots (e_1, e_2)\ldots}$,
where $(e_1,e_2)$ is an entity pair.

Typically the background ontology comprises many types and
relations and is populated with numerous entities. As we mentioned in the
introduction, in order to be able to find a quality mapping for the
target relation, one must consider the large space of mappings
formed by the set of database {\em views} defined using the
relational operations of select, project, join, and union. When
mapping into a unary type of the target ontology we consider views
comprised of unions over the background types.

When constructing a mapping to a target relation, it is often useful
to select a subset of a background relation or view.  One way that
\sys\ performs selection  is by adding types into the join. Consider the
following views:
\begin{example}
\label{e:facility}
\begin{small}
\begin{align*}
\pi_{\text{1.name,2.name}}& \text{\ \ \
$\text{location}^1$ $\Join $containedBy $\Join$ $\text{location}^2$}\\
\pi_{\text{city.name,country.name}} & \text{\ \ \ city $\Join$ containedBy $\Join$ country}\\
\pi_{\text{sportsFacility.name,city.name}} & \text{\ \ \
sportsFacility $\Join$ containedBy $\Join$ city}
\end{align*}
\end{small}
\end{example}

Note that the second and the third views are subsets of the first,
yet they denote very different relations. It would not be correct to use
the instances of the first view to train the relation extractor for
target relation \emph{cityLocatedInCountry}.

Formally, we represent an ontology mapping between a target relation
$\tg{r}(\tg{t}_1,\tg{t}_2)$ and a background view defined as
$\cup\, \bg{r}(\bg{t}_1,\bg{t}_2)$, where $\bg{r}$ is created by joining binary
relations in the background ontology (\eg, basketballPlayer
$\Join$ basketballRosterPosition $\Join$
basketballTeam
$\Join_{\text{coach}}$ headCoach), and $\bg{t}_1$ and $\bg{t}_2$ are
selection operations defined by the corresponding entity types. Unless
stated otherwise, we will henceforth use \tg{r}, \tg{t}, \tg{e}
to denote relations, types, and entities in the target ontology, and
\bg{r}, \bg{t}, \bg{e} to denote those in the background ontology.

Since a database view is equivalent to a query, we may apply it to
the ground instances of the background ontology to return a new
table (denoted ``{\em the instances of the ontology view}'');
assuming the mapping is good, these new entity pairs will be good
training instances for the target relation.
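Concretely, materializing such a view can be sketched as follows (a toy illustration; the relation names and ground facts are hypothetical, not drawn from any real ontology):

```python
# Sketch: materialize a two-hop join view over toy ground instances.
# Relation names and facts below are hypothetical illustrations.

def join(rel1, rel2):
    """Join two binary relations on rel1's second argument and rel2's
    first argument, projecting onto the outer arguments."""
    index = {}
    for a, b in rel2:
        index.setdefault(a, []).append(b)
    return {(x, z) for x, y in rel1 for z in index.get(y, [])}

plays_for = {("Kobe Bryant", "Lakers")}
head_coach = {("Lakers", "Phil Jackson")}

# Applying the view plays_for |><| head_coach to the ground instances
# yields candidate training pairs for a target relation like isCoachedBy.
print(join(plays_for, head_coach))  # {('Kobe Bryant', 'Phil Jackson')}
```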


\subsection{Filtering the Set of Candidates}
\label{generate_candidates}

Large ontologies may contain millions of entities belonging to thousands of
types and participating in hundreds of relations. Even if we bound the
number of joins allowed in view definitions, there are still too many
potential mappings for \sys\ to consider them all. Therefore, some
filtering is required to narrow down the search space.

An obvious criterion is that the candidate joined relation must
contain at least one ground instance present in the corresponding
target relation. For the moment, assume we have already solved the
entity mapping problem (\ie, we know that entity \tg{e} in the
target ontology corresponds to \bg{e} in the background ontology).
We can think of the background ontology as a graph, where entities
are nodes and binary relations are edges. If background entity pair
$(\bg{e}_1, \bg{e}_2)$ maps $(\tg{e}_1,\tg{e}_2)$ of the target
relation, we can return all paths between $\bg{e}_1$ and
$\bg{e}_2$.\footnote{These paths may be easily generated using
breadth first search.}  Suppose joined relation $\bg{r}=r_1 \Join
r_2 \Join \ldots \Join r_k$ is one path, then $r$ is a potential
mapping to $\tg{r}$.
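The path generation mentioned in the footnote can be sketched as a breadth-first search over labeled edges (toy graph; entity and relation names are hypothetical):

```python
from collections import deque

def relation_paths(graph, src, dst, max_len=3):
    """Breadth-first search returning all label sequences (candidate
    joined relations) along paths of length <= max_len from src to dst.
    graph maps each node to a list of (relation, neighbor) edges."""
    paths = []
    queue = deque([(src, ())])
    while queue:
        node, labels = queue.popleft()
        if node == dst and labels:
            paths.append(labels)
            continue
        if len(labels) == max_len:
            continue
        for rel, nxt in graph.get(node, []):
            queue.append((nxt, labels + (rel,)))
    return paths

# Each returned path r1, ..., rk corresponds to a candidate view
# r1 |><| ... |><| rk for the target relation.
g = {"KobeBryant": [("playsFor", "Lakers")],
     "Lakers": [("headCoach", "PhilJackson")]}
print(relation_paths(g, "KobeBryant", "PhilJackson"))
# [('playsFor', 'headCoach')]
```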


However, the entity mapping problem itself is hard. For example,
suppose that (\emph{Kobe Bryant, Phil Jackson}) is an instance of
the target relation {\tt isCoachedBy}, and let Freebase be the
background ontology. There are more than 10 people named \emph{Phil
Jackson} in Freebase, including a football player, an author and a
film crewmember.

Human beings can easily pick out the basketball coach among these ten
people because the context (\eg, \emph{Kobe Bryant}, ``coach'')
disambiguates him.
We can encode this intuition into our automated mapping algorithm by
solving the entity mapping problem jointly with relation mapping.

For each training instance $(\tg{e}_1,\tg{e}_2)$ of the target relation
$\tg{r}(\tg{t}_1,\tg{t}_2)$, we create:

\begin{itemize}
  \item {\em Entity Mapping Candidates:} we return two sets of entities
$E_1=\{\ldots,\bg{e}_{1,i},\ldots\}$ and
$E_2=\{\ldots,\bg{e}_{2,j},\ldots\}$ found in the background ontology by
using IR search techniques on the names of
$\tg{e}_1$ and $\tg{e}_2$.\footnote{For Freebase, one can find these
  entities with its search API
{\tt http://api.freebase.com/api/service/search?query=string}.}

  \item {\em Type Mapping Candidates:} the background types of elements of
    $E_1$ and $E_2$ are type mapping candidates for $\tg{t}_1$ and $\tg{t}_2$,
    respectively.

  \item {\em Relation Mapping Candidates:} $\bg{r}$ is a candidate for
$\tg{r}$ if $(\bg{e}_1^*,\bg{e}_2^*)\in \bg{r}$ for some
$\bg{e}_1^* \in E_1$ and $\bg{e}_2^* \in E_2$.

\end{itemize}
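Under the simplifying assumption that the IR search is an exact substring match over entity names (the real system would use Freebase's search API), the three candidate sets can be sketched as:

```python
# Sketch of candidate generation for one training instance.
# Toy background KB; name search is reduced to substring matching,
# a stand-in for real IR search over entity names.

entity_types = {"Phil Jackson (coach)": {"basketballCoach"},
                "Phil Jackson (author)": {"author"},
                "Kobe Bryant": {"basketballPlayer"}}
relations = {"headCoach": {("Kobe Bryant", "Phil Jackson (coach)")}}

def name_search(name):
    return [e for e in entity_types if name.lower() in e.lower()]

def candidates(e1, e2):
    E1, E2 = name_search(e1), name_search(e2)            # entity candidates
    T1 = set().union(*(entity_types[e] for e in E1))     # type candidates
    T2 = set().union(*(entity_types[e] for e in E2))
    R = {r for r, pairs in relations.items()             # relation candidates
         if any((a, b) in pairs for a in E1 for b in E2)}
    return E1, E2, T1, T2, R

print(candidates("Kobe Bryant", "Phil Jackson"))
```

Note that ambiguous names (here, two entities named Phil Jackson) simply yield multiple entity candidates, leaving the disambiguation to the joint model.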


\subsection{Constructing Mappings Jointly}

In order to define a probability distribution over the space of
possible joint mappings, we use Markov logic, a formalism which
combines the expressiveness of first-order logic with the semantics
of Markov networks~\cite{richardson-domingos06}.  Our model
consists of a set of weighted rules over \emph{predicates} and their
negations. \sys\ encodes the mapping candidates of interest with the
following predicates:
\[\mtb{r},\mtb{e},\mtb{t}\]
\mtb{r} is true if the joined relation \bg{r} maps to the target
relation \tg{r}; the analogous definitions apply to \mtb{e} and \mtb{t}.

An assignment to all predicates \mtb{r}, \mtb{t} and \mtb{e}
represents a possible ontology mapping result. By finding the best
assignment that maximizes the weighted sum of all satisfied rules,
we can create views for $\tg{r}(\tg{t}_1,\tg{t}_2)$ (\ie, the target
relation and its type constraints) as the union query:
\begin{align}
\cup_{\bg{r},\bg{t}_1,\bg{t}_2}\pi_{\text{$\bg{t}_1$.name,$\bg{t}_2$.name}}\text{\
\ \ }\bg{r}
\end{align}

\noindent where $\match{\tg{t}_1}{\bg{t}_1}$,  $\match{\tg{t}_2}{\bg{t}_2}$
and $\mtb{r}$ are true in the assignment.

Formally, let variables $\mathbf{X}$ represent truth assignments to
all grounded predicates. We then model the joint probability
distribution as
\begin{equation}
P(\mathbf{X}=\mathbf{x}) = \frac{1}{Z} \exp \left(\sum_i w_i
n_i(\mathbf{x})\right)
\end{equation}
 where $i$ ranges over our set of rules, $w_i$
denotes the weight of rule $i$, $n_i(\mathbf{x})$ is the number of
true groundings of rule $i$ under assignment $\mathbf{x}$, and $Z$ is the
partition function $Z=\sum _{\mathbf{x}}\exp\left(\sum_i w_i
n_i(\mathbf{x})\right)$. Our goal is to find the MAP solution
$\argmax_{\mathbf{x}} P(\mathbf{X}=\mathbf{x})$.
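For intuition, on a toy problem with only a handful of ground predicates the MAP assignment can be found by brute-force enumeration. This is a sketch only: the rules and weights below are illustrative, and this is not the inference algorithm used by the system.

```python
from itertools import product

def map_assignment(variables, rules):
    """Brute-force MAP inference: score each truth assignment by the
    weighted sum of satisfied rule groundings and keep the best."""
    best, best_score = None, float("-inf")
    for values in product([False, True], repeat=len(variables)):
        x = dict(zip(variables, values))
        score = sum(w for w, holds in rules if holds(x))
        if score > best_score:
            best, best_score = x, score
    return best

# Illustrative groundings: should the target relation map to r1, r2, or both?
rules = [
    (1.0, lambda x: x["r~r1"]),               # name-similarity evidence for r1
    (1.0, lambda x: x["r~r1"] or x["r~r2"]),  # a shared instance covers either
    (0.5, lambda x: not x["r~r2"]),           # Ockham's-razor style penalty
]
print(map_assignment(["r~r1", "r~r2"], rules))
# {'r~r1': True, 'r~r2': False}
```

Enumeration is exponential in the number of ground predicates, which is why a dedicated inference algorithm is needed in practice.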

We introduce our Markov logic rules below, and in Section
\ref{s:inf} we present an inference algorithm to compute
$\argmax_{\mathbf{x}} P(\mathbf{X}=\mathbf{x})$.


\subsubsection{Rules in Markov Logic}

\textbf{Name similarity: }Related relations in target and background
ontologies may have similar names. We compare $\tg{r}$ and $\bg{r}$
by first tokenizing the relation names (splitting on camel-case
transitions and punctuation), and then checking if they have some
words $w$ in common. Formally, we encode this as a rule
\begin{align}
&hasWord(\tg{r},w) \wedge hasWord(\bg{r},w) \nonumber\\
\wedge& candidate(\tg{r},\bg{r})\Rightarrow \mtb{r}
\end{align}
$hasWord(\tg{r},w)$ and $hasWord(\bg{r},w)$ are variables that are true
when word $w$ appears in the surface string of $\tg{r}$ or $\bg{r}$,
respectively. For example, the word $w=$``coach'' appears in both
$\tg{r}$:~{\tt isCoachedBy} and $\bg{r}$:~\emph{BasketballTeamHeadCoach},
which suggests the latter might be a good mapping.

For simplicity, we will from now on assume that the predicate
$candidate(\tg{r},\bg{r})$ is implicitly added as a pre-condition to
every rule.
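The tokenization and word-overlap test described above can be sketched as follows. The suffix-stripping stemmer is our assumption: the text does not specify how forms such as ``coached'' and ``coach'' are matched.

```python
import re

def tokenize(name):
    """Split a relation name on camel-case transitions and punctuation,
    lowercasing the tokens (a sketch of the hasWord grounding)."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name)
    return {w.lower() for w in re.split(r"[^A-Za-z0-9]+", spaced) if w}

def shares_word(rel_a, rel_b):
    """Word overlap after a crude suffix-stripping stem -- an assumed
    detail, not specified in the text."""
    stem = lambda ws: {re.sub(r"(ed|ing|s)$", "", w) for w in ws}
    return bool(stem(tokenize(rel_a)) & stem(tokenize(rel_b)))

print(sorted(tokenize("isCoachedBy")))                        # ['by', 'coached', 'is']
print(shares_word("isCoachedBy", "basketballTeamHeadCoach"))  # True
```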

The name similarity rule applies to types and entities in the same
way:
\begin{align}
&hasWord(\tg{e},w) \wedge hasWord(\bg{e},w)\Rightarrow \mtb{e}\\
&hasWord(\tg{t},w) \wedge hasWord(\bg{t},w)\Rightarrow \mtb{t}
\end{align}


\textbf{Relation instance constraints: } The goal of \sys\ is to
create queries over the background ontology whose entity pairs serve
as good training instances for the target relation's extractor. It is
therefore natural to expect that the training instances of the target
relation \tg{r} are also instances of the background joined relation
\bg{r}.

We write the above intuition as
\begin{align}
\label{eq:naiverelins}
&\isttg{e}{r} \wedge \istbg{e}{r} \wedge  \match{\tg{e}_1}{\bg{e}_1} \wedge \match{\tg{e}_2}{\bg{e}_2}\nonumber\\
%&\tg{r}(\tg{e}_1, \tg{e}_2) \wedge \bg{r}(\bg{e}_1, \bg{e}_2) \wedge  \match{\tg{e}_1}{\bg{e}_1} \wedge \match{\tg{e}_2}{\bg{e}_2}\\
\Rightarrow & \mtb{r}
\end{align}
where \isttg{e}{r} is true when $(\tg{e}_1, \tg{e}_2)$ is an
instance of the target relation $\tg{r}$; the same notation is used for
\istbg{e}{r}. By adding $\match{\tg{e}_1}{\bg{e}_1}$ and
$\match{\tg{e}_2}{\bg{e}_2}$, we ensure $\tg{r}$ and $\bg{r}$ are
dealing with the same entity pair.

This intuitive rule, however, suffers from a problem. Suppose
\isttg{e}{r} and \istbg{e}{r} are
true; then the conjunctive normal form of Equation
\ref{eq:naiverelins} is
\begin{align}
\notmtbs{e}{1} \vee \notmtbs{e}{2} \vee  \mtb{r}
\end{align}

which means that when \mtb{r} is false, the system tends to set at least
one of $\match{\tg{e}_1}{\bg{e}_1}$ and $\match{\tg{e}_2}{\bg{e}_2}$
to false. This is problematic. For example, suppose $\tg{r}$ is
\emph{stateHasCapital} and $\bg{r}$ is \emph{locationContains};
$\tg{e}_1$ and $\bg{e}_1$ are \emph{Massachusetts} in the target and
background ontology, and $\tg{e}_2$ and $\bg{e}_2$ are \emph{Boston}
(\ie, the entity mapping is correct). Then, although \mtb{r} is false,
it is not correct to suggest $\notmatch{\tg{e}_1}{\bg{e}_1}$ or
$\notmatch{\tg{e}_2}{\bg{e}_2}$.

Another weakness of the intuitive rule is that it does not handle
overlapping relations. Suppose two joined relations $\bg{r}_1,\bg{r}_2$
both contain one training instance of $\tg{r}$; the rule then produces two
formulas suggesting that we map $\tg{r}$ into both. Overlapping mapping
results do not help relation extraction, and moreover, these
formulas overemphasize one instance relative to other instances covered
by fewer background relations.

With the above observations, we refine the intuitive rule into the
following one:
\begin{align}
&  \mtbs{e}{1} \wedge \mtbs{e}{2} \wedge \isttg{e}{r} \nonumber\\
\wedge & \istbg{e}{r_1} \wedge \istbg{e}{r_2} \ldots \wedge \istbg{e}{r_k}\nonumber\\
\wedge & \left(\match{\tg{r}}{\bg{r}_1} \vee \match{\tg{r}}{\bg{r}_2} \ldots \vee \match{\tg{r}}{\bg{r}_k}\right) \label{eq_relationinstance}
\end{align}

We replace $\Rightarrow$ with $\wedge$ to prevent negative evidence on
the relation mapping from forcing changes to the entity mapping. We use
the disjunction $\vee$ among the $\match{\tg{r}}{\bg{r}_i}$ to avoid
overlapping mappings and overemphasizing the instance $(\tg{e}_1, \tg{e}_2)$.

One may wonder why Equation \ref{eq_relationinstance} is not
symmetric between $\tg{r}$ and $\bg{r}$. This is because the
target ontology is usually small and its relations do not overlap, \ie,
$(\tg{e}_1,\tg{e}_2)$ will not be an instance of two distinct relations
$\tg{r}$ and $\tg{r}^\prime$ with $\tg{r}\neq \tg{r}^\prime$.


\textbf{Length of join: } While joining binary relations over the background
ontology greatly extends the representational ability of the views,
it may also add noise. The following rule encodes a preference for views
with fewer joins.
\begin{align}
\label{eq_lengthJoin}
short(\bg{r}) \Rightarrow \mtb{r}
\end{align}


\textbf{Negative instances: }While many relations only contain positive
examples, some ontologies embody the closed-world assumption or otherwise
present negative examples, and these can be powerful.

If such a negative target example is actually present in a background view
$\bg{r}$, then \mtb{r} is unlikely to hold.
\begin{align}
\label{eq:negativeinstances}
&\nisttg{e}{r} \wedge \istbg{e}{r} \wedge  \mtbs{e}{1} \wedge \mtbs{e}{2}\nonumber\\
\Rightarrow & \notmtb{r}
\end{align}



Unlike Equation \ref{eq_relationinstance}, we use $\Rightarrow$ here
because when \mtb{r} and \istbg{e}{r} hold but \nisttg{e}{r},
it is indeed doubtful that the entity mapping is good, \ie, one of
\mtbs{e}{1} and \mtbs{e}{2} could be false.



\textbf{Rank in search: }If the background ontology provides an entity
search engine, higher-ranked returned entities are more likely
to be good mappings. The rule can be written as
\begin{align}
topSearch(\tg{e},\bg{e})  \Rightarrow \mtb{e}
\end{align}
where $topSearch(\tg{e},\bg{e})$ means $\bg{e}$ is among the top
entities returned by querying $\tg{e}$ on the search engine.

\textbf{Type constraints: }Suppose we already know that the type of
\emph{Phil Jackson} (\eg, {\tt coach}) maps to
\emph{basketballCoach} in the background ontology; we then have more
confidence to map \emph{Phil Jackson} to a \emph{basketballCoach}
entity, rather than an \emph{author} entity, even if the two entities
have the same name. Formally, the rule is:
\begin{align}
\label{eq_entitytype}
 \tisttg{e}{t} \wedge  \tistbg{e}{t} \wedge \mtb{e}  \wedge \mtb{t}
\end{align}
We use \tisttg{e}{t} to denote that the type of \tg{e} is \tg{t}, and
$\tistbg{e}{t}$ to denote that the type of $\bg{e}$ is $\bg{t}$. The
above rule says we gain weight when the entity mapping $\mtb{e}$ and
type mapping $\mtb{t}$ are consistent.

When the types of $\bg{e}$ and $\tg{e}$ are unique in their ontologies,
Equation \ref{eq_entitytype} is good enough. But if the
background entity $\bg{e}$ has several types, we
refine the rule into
\begin{align}
\label{eq_entitytype2}
 & \mtb{e} \wedge  \tisttg{e}{t} \wedge \tistbg{e}{t_1} \wedge \tistbg{e}{t_2} \ldots \wedge \tistbg{e}{t_k}\nonumber\\
 \wedge &\left( \match{\tg{t}}{\bg{t}_1} \vee \match{\tg{t}}{\bg{t}_2} \ldots \vee \match{\tg{t}}{\bg{t}_k}\right)
\end{align}



\textbf{Mutual exclusion: }For entity mapping, one can assume an
entity in the target ontology maps to only one entity in the
background ontology. We encode this as
\begin{align}
\label{rule:mutualexclusion}
\mtb{e} \wedge \bg{e} \neq \bg{e}^\prime \Rightarrow
\notmatch{\tg{e}}{\bg{e}^\prime}
\end{align}

For type mapping and relation mapping, there is no explicit mutual
exclusion, because \sys\ may map these into a union of background
types and relations. Note, however, that Equation
\ref{eq_relationinstance} and Equation \ref{eq_entitytype2} already
impose implicit mutual exclusion, which helps \sys\ avoid heavily
overlapping mappings.



\textbf{Ockham's Razor: }In practice, we prefer the assignment making
the fewest predictions, so as to avoid predictions that are
true only with very weak evidence. For this purpose, \sys\ adds
rules with a single negated predicate: $\notmtb{r}$ for each relation
candidate, and $\notmtb{e}$, $\notmtb{t}$ for each entity and type candidate.

