\section{Ontological Smoothing}
\label{s:map}

The key idea of ontological smoothing is that the background ontology
$\mathcal{O}$ contains many relation instances that are useful for
training the extractor for the target relation $R$. We call them
``smoothed instances''. To find them, the core component of \sys\
constructs a mapped view. \sys\ then queries the ontology, retrieves
all instances of the view, and uses them as smoothed instances to
train the extractor for $R$.



\subsection{Problem Definition}
We assume the target relation is defined in terms of unary types
$T$ and binary relations $R$. We express the selectional preference
(i.e., type constraint) of a binary relation by $R(T_1,T_2)$. For
example, \mtt{isCoachedBy(athlete,coach)} is a relation in the NELL
ontology~\cite{carlson-aaai10}. We assume that each target relation
comes with a set of labeled relation instances. An instance is an
entity pair denoted by $(E_1,E_2)$.

Typically the background ontology $\mathcal{O}$ comprises many
types and relations and is populated with numerous entities; we
denote these by $t$, $r$, and $e$, respectively. The ontology-mapping
task of \sys\ is to find a mapped view over $\mathcal{O}$ for the
target relation \mtt{R(T_1,T_2)}.

Building a high-quality mapping poses two major challenges. First,
\sys\ must consider the SQL operators join, union, and select, so
the space of mapped views is huge. With respect to join and union,
the example in Figure \ref{f:systemoverview} has already shown their
importance. With respect to select, the following example shows the
importance of selectional preference:
\begin{align}
\small \tt SELECT\ &\tt e_1,e_2\ FROM\ containedBy\\
\tt SELECT\ &\tt e_1,e_2\ FROM\ containedBy, sportsFacility, city\nonumber\\
&\tt WHERE\ containedBy.e_1=sportsFacility.e\nonumber\\
&\tt AND\ containedBy.e_2=city.e \label{view:stadiumInCity}
\end{align}
The second mapped view selects a subset of the first yet denotes a
very different relation. It would be incorrect to use the instances
of the first view to train the relation extractor for the target
relation \mtt{stadiumInCity}.

The second challenge is that \sys\ must jointly map entities, types,
and relations. Given the seed relation instance \mtt{(Nokia,Finland)}
for the relation \mtt{headquarteredIn}, \mtt{Nokia} could be a city in
\mtt{Finland} or a company from \mtt{Finland}. Correctly identifying
the corresponding entity and type of \mtt{Nokia} in
$\mathcal{O}$ helps relation mapping, and vice versa.

Therefore, we formulate the ontological smoothing task of \sys\ as
finding a mapped view $\cup r(t_1,t_2)$ over $\mathcal{O}$ for
$R(T_1,T_2)$. We emphasize that $\cup$ is the union operator; $r$ is
created by joining binary relations in $\mathcal{O}$; and $t_1,t_2$
are select operations defined by the type-mapping results for
$T_1,T_2$.\footnote{We abbreviate the view in Equation
\ref{view:stadiumInCity} as \mtt{containedBy(sportsFacility,city)} in
this paper.} If an entity pair $(e_1,e_2)$ is an instance of the view
$\cup r(t_1,t_2)$, it becomes a smoothed instance for $R$.
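To make the view semantics concrete, the following sketch evaluates a mapped view over a toy in-memory ontology; the entities, types, and relation contents below are illustrative assumptions, not drawn from any real ontology.

```python
# Toy sketch of evaluating a mapped view r(t1, t2): the select
# operators t1, t2 restrict both arguments to the mapped types.
# All data below is hypothetical.

# Background ontology: binary relation instances and entity types.
contained_by = {("Camp Nou", "Barcelona"), ("Camp Nou", "Catalonia"),
                ("Louvre", "Paris")}
types = {"Camp Nou": "sportsFacility", "Louvre": "museum",
         "Barcelona": "city", "Paris": "city", "Catalonia": "region"}

def view_instances(relation, t1, t2):
    """Instances of the view relation(t1, t2)."""
    return {(e1, e2) for (e1, e2) in relation
            if types.get(e1) == t1 and types.get(e2) == t2}

# Smoothed instances for stadiumInCity via containedBy(sportsFacility, city).
smoothed = view_instances(contained_by, "sportsFacility", "city")
```

Here only the pair whose arguments satisfy both type selections survives, mirroring the difference between the two views in Equation \ref{view:stadiumInCity}.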

To find the mapped view, \sys\ uses Markov logic, a formalism which
combines the expressiveness of first-order logic with the semantics
of Markov networks~\cite{richardson-domingos06}. Our model generates
a set of first-order logic \emph{rules} over \emph{predicates} and
their negations. \sys\ encodes the mappings of interest in the
following predicates: \mtt{Mp(r,R), Mp(e,E), Mp(t,T)}. \mtt{Mp(r,R)=1}
if the joined relation \mtt{r} maps\footnote{For a given $R$, multiple
\mtt{Mp(r,R)} could be true, which realizes the union. The same holds
for \mtt{Mp(e,E)} and \mtt{Mp(t,T)}.} to the
target relation \mtt{R}. The same definition applies to
\mtt{Mp(e,E)} and \mtt{Mp(t,T)}. The probability of a \emph{state}
(i.e., an assignment to the predicates) is given by
$P(x)=\frac{1}{Z}\exp\left(\sum_i w_i n_i(x)\right)$, where $Z$ is a
normalization constant, $w_i$ is the weight of the $i$th rule, and
$n_i(x)$ is the number of satisfied groundings of the $i$th rule.
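For concreteness, the state probability can be sketched by brute-force enumeration over a tiny model with two ground predicates; the two rules and their weights below are hypothetical, chosen only to illustrate the formula.

```python
import itertools
import math

# Toy sketch of the Markov-logic distribution
#   P(x) = (1/Z) exp(sum_i w_i * n_i(x))
# over two ground predicates (mp_r, mp_e). Hypothetical rules:
#   rule 0 (w=1.5): Mp_r                ("the relation mapping holds")
#   rule 1 (w=0.8): Mp_r => Mp_e        ("relation mapping implies entity mapping")
weights = [1.5, 0.8]

def n(x):
    """Number of satisfied groundings of each rule in state x."""
    mp_r, mp_e = x
    return [1 if mp_r else 0,
            1 if (not mp_r or mp_e) else 0]

states = list(itertools.product([False, True], repeat=2))

# Normalization constant Z sums the unnormalized score over all states.
Z = sum(math.exp(sum(w * c for w, c in zip(weights, n(x)))) for x in states)

def P(x):
    return math.exp(sum(w * c for w, c in zip(weights, n(x)))) / Z
```

In the real model the state space is far too large to enumerate, which is exactly why the MAP inference of Section \ref{s:inf} resorts to relaxation.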



\subsection{Generating Candidates}
\label{generate_candidates}

The space of mapped views is huge under the SQL operators join,
union, and select, especially since a typical background ontology
contains millions of entities. It is therefore necessary to narrow
down the space of mapped views. \sys\ applies the following simple,
general rules. These rules are hard constraints, which enables \sys\
to filter away \mtt{r}, \mtt{e}, and \mtt{t} efficiently. With
respect to relations:
\begin{align}
\label{eq:candidateR}
&\tt Inst(R,(E_1,E_2))\wedge Inst(r,(e_1,e_2))  \nonumber\\
\wedge & \tt  EqNm(e_1,E_1) \wedge EqNm(e_2,E_2) \Rightarrow Cnddt(r,R)
\end{align}

The intuition behind Equation \ref{eq:candidateR} is that a candidate
joined relation must contain at least one instance that is also
present in the target relation. Here \mtt{r} is a mapping candidate
for \mtt{R} if \mtt{Cnddt(r,R)=1}. \mtt{Inst(R,(E_1,E_2))=1} and
\mtt{Inst(r,(e_1,e_2))=1} indicate that the entity pairs
\mtt{(E_1,E_2)} and \mtt{(e_1,e_2)} are relation instances of \mtt{R}
and \mtt{r}, respectively. \mtt{EqNm(e,E)} indicates that the names
or aliases of \mtt{e} and \mtt{E} are equal.

With respect to entities, we consider all entities in $\mathcal{O}$
having an equal name or alias, i.e., $\tt EqNm(e,E)\Rightarrow
Cnddt(e,E)$.

With respect to types, we derive candidates from the entity
candidates:
\begin{equation} \tt Cnddt(e,E)\wedge Tp(e,t) \wedge
Tp(E,T) \Rightarrow Cnddt(t,T)\end{equation}
where \mtt{Tp(e,t)} and
\mtt{Tp(E,T)} indicate that the types of \mtt{e} and \mtt{E} are
\mtt{t} and \mtt{T} in the background and target ontologies,
respectively.
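The candidate-generation rules amount to a name-matching pass over the two ontologies; the sketch below shows the relation rule on toy data, where the seed instances, relation names, and alias table are illustrative assumptions.

```python
# Sketch of the hard candidate rule Cnddt(r, R): r is a candidate for R
# iff they share at least one name-matched instance. All data below is
# hypothetical.

# Target seeds and background relation instances.
seeds = {"stadiumInCity": {("Camp Nou", "Barcelona")}}
bg_relations = {"containedBy": {("Camp_Nou", "Barcelona"),
                                ("Louvre", "Paris")}}
aliases = {"Camp_Nou": {"Camp Nou"}}  # alias sets for background entities

def eq_nm(e, E):
    """EqNm(e, E): names or aliases of e and E are equal."""
    return e == E or E in aliases.get(e, set())

def candidate_relations(seeds, bg_relations):
    cands = set()
    for R, insts in seeds.items():
        for r, bg_insts in bg_relations.items():
            for (E1, E2) in insts:
                for (e1, e2) in bg_insts:
                    if eq_nm(e1, E1) and eq_nm(e2, E2):
                        cands.add((r, R))
    return cands
```

The entity and type rules follow the same pattern, propagating name matches into \mtt{Cnddt(e,E)} and then \mtt{Cnddt(t,T)}.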


\subsection{Constructing Mapped View Jointly}
Markov logic enables us to jointly map types, relations, and
entities.

The previous subsection narrowed the mapped views down to a set of
candidates. This can be expressed as hard rules:
\begin{equation}
\tt Mp(r,R) \Rightarrow Cnddt(r,R)
\end{equation}
where \mtt{Mp(r,R)} is true iff \mtt{r} maps to \mtt{R}. Similar
rules apply to \mtt{t} and \mtt{e}.

\emph{Relation instance constraints:} \sys\ expects the seed
instances of the target relation \mtt{R} to also be relation
instances of the joined relation \mtt{r} from $\mathcal{O}$, provided
that the entity mappings agree:
\begin{align}
\label{eq:naiverelins}
&\tt Inst(R,(E_1,E_2))\wedge Inst(r,(e_1,e_2))  \nonumber\\
\wedge & \tt  Mp(e_1,E_1) \wedge Mp(e_2,E_2) \Rightarrow Mp(r,R)
\end{align}
Compared to Equation \ref{eq:candidateR}, this is a joint rule
because the \mtt{Mp(e,E)} are predicates whose values must be
inferred. In practice, when \mtt{Inst(R,(E_1,E_2))\wedge
Inst(r,(e_1,e_2))=1}, \mtt{Mp(r,R)=0} would suggest that one of
\mtt{Mp(e_1,E_1)} and \mtt{Mp(e_2,E_2)} is false. This implication
is wrong because relations can overlap. Therefore, we refine the
intuitive rule as:
\begin{align}
\label{eq_relationinstance}
&\tt Mp(e_1,E_1) \wedge Mp(e_2,E_2) \wedge Inst(R,(E_1,E_2))\nonumber\\
\wedge & \tt \left(\vee_{k=1}^{K} Inst(r_k,(e_1,e_2))\right) \wedge \left(\vee_{k=1}^{K} Mp(r_k,R)\right)
\end{align}

We replace $\Rightarrow$ with $\wedge$ to prevent negative evidence
about relation mapping from flowing to entity mapping. We use the
disjunction $\vee$ over \mtt{Mp(r_k,R)} to handle overlapping
relations. Equation \ref{eq_relationinstance} is not symmetric
between \mtt{r} and \mtt{R} because the target ontology is usually
small and its relations do not overlap.
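The effect of the disjunction can be checked on a single grounding: one true mapping among overlapping candidate relations suffices to satisfy the formula. The sketch below is a toy truth-table check, not part of the actual grounding machinery.

```python
# Toy check of the refined relation-instance rule (a conjunction with
# disjunctions over the K candidate joined relations r_k).
def relation_instance_rule(mp_e1, mp_e2, inst_R, inst_r, mp_r):
    """Truth value of the refined rule for one grounding.
    inst_r and mp_r are lists over the K candidate relations r_k."""
    return mp_e1 and mp_e2 and inst_R and any(inst_r) and any(mp_r)

# The seed pair occurs in two overlapping relations; only the first
# maps to R, yet the grounding is still satisfied.
ok = relation_instance_rule(True, True, True,
                            inst_r=[True, True], mp_r=[True, False])
```

With the naive implication of Equation \ref{eq:naiverelins}, the unmapped second relation would instead push one of the entity mappings toward false.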

\emph{Negative instance constraints:} Some ontologies embody the
closed-world assumption or otherwise provide negative examples, and
these can be powerful. If such a negative example is actually
present in \mtt{r}, then it is unlikely that \mtt{Mp(r,R)=1}. We
express this as a hard rule:
\begin{align}
&\tt Inst(r,(e_1,e_2)) \wedge \neg Inst(R,(E_1,E_2))\nonumber\\
\wedge & \tt Mp(e_1,E_1) \wedge Mp(e_2,E_2) \Rightarrow \neg Mp(r,R)
\end{align}
Unlike Equation \ref{eq_relationinstance}, we use $\Rightarrow$ here
because when \mtt{Mp(r,R)} and \mtt{Inst(r,(e_1,e_2))} hold but
\mtt{Inst(R,(E_1,E_2))} does not, the entity mapping is very likely
incorrect.

\emph{Type Constraints: }Suppose the type of \mtt{Nokia} maps to
\mtt{businessOperation} in $\mathcal{O}$; then we tend to map it to
\mtt{Nokia Corp.} rather than a Finnish city. The soft rule is:
\begin{align}
\tiny\tt Mp(e,E)\wedge Tp(E,T) \wedge \left(\vee_{k=1}^K Tp(e,t_k)\right) \wedge \left(\vee_{k=1}^K Mp(t_k,T)\right)\nonumber
\end{align}
The explanation of this rule is comparable to Equation
\ref{eq_relationinstance}.

\emph{Length of Join: }While joining binary relations over the
background ontology greatly extends the representational power of
the views, it may also add noise. We add the soft rule $\tt
short(r)\Rightarrow Mp(r,R)$ to prefer short join chains.

\emph{Mutual Exclusion: }We assume there is little duplication among
entities in $\mathcal{O}$. We express this with the hard rule $\tt
Mp(e,E)\Rightarrow \neg Mp(e^\prime, E)$.

\emph{Regularization: }Following Occam's razor, \sys\ should avoid
predictions that are true with only very weak evidence. We add soft
rules for type and relation mappings: $\tt \neg Mp(t,T)$ and $\tt
\neg Mp(r,R)$. For entity mappings, the mutual-exclusion rule
provides regularization.

\subsection{Maximum a Posteriori Inference}
%\section{Inference: Blocked Gibbs Sampling}
\label{s:inf}

We note that computing $\argmax_{\mathbf{x}}
P(\mathbf{X}=\mathbf{x})$ is challenging: (1) there exist thousands
of grounded predicates, making exact inference intractable; (2) the
dependencies represented by our rules break the joint distribution
into islands of high-probability states with no paths between them.

For tractability we turn to approximate inference by relaxing the
problem into a linear program (i.e., LP relaxation). First, every
grounding of a rule is converted into conjunctive normal form,
denoted $CNF_i=\wedge_j c_j$, where $c_j$ is a clause of rule $i$.
Let $c_j^+$ and $c_j^-$ be the sets of indices of the variables that
appear in positive and negative form in clause $c_j$, and let $H$ be
the set of indices of the hard rules. The inference problem can then
be relaxed as:
\begin{align}
\max & \displaystyle{\sum}_i w_i z_i\label{eq_lp_sumweight}\\[-1ex]
s.t. & \displaystyle{\sum}_{k\in c_j^+}y_k + \displaystyle{\sum}_{k\in c_j^-}(1-y_k) \geq 1,\ i\in H\label{eq_lp_hard}\\[-1ex]
& \displaystyle{\sum}_{k\in c_j^+}y_k + \displaystyle{\sum}_{k\in c_j^-}(1-y_k) \geq z_i,\ i\not\in H\label{eq_lp_soft}\\[-1ex]
& y_k,z_i \in [0,1]\nonumber
\end{align}
where $y_k$ indicates the truth assignment of predicate $k$, and
$z_i$ indicates whether rule $i$ is satisfied. Equation
\ref{eq_lp_hard} ensures that hard rules are satisfied, and Equation
\ref{eq_lp_soft} allows soft rules to be broken, in which case $z_i$
takes a smaller value. Theoretically, when $w_i=1$, the LP relaxation
yields a $3/4$-approximation algorithm.
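The LP above can be sketched with an off-the-shelf solver; the version below assumes SciPy's \mtt{linprog} and encodes each clause as one inequality row, with a toy two-predicate instance rather than a full grounding.

```python
from scipy.optimize import linprog

# Sketch of the LP relaxation. A clause is (positive_indices,
# negative_indices); hard clauses must hold, each soft clause i has
# weight w_i and slack variable z_i. SciPy is an assumed dependency.
def lp_relaxation(n_preds, hard, soft, weights):
    n_vars = n_preds + len(soft)       # variables: y_0..y_{n-1}, z_0..z_{m-1}
    c = [0.0] * n_preds + [-w for w in weights]   # minimize -(sum w_i z_i)
    A, b = [], []
    for pos, neg in hard:
        # sum_{k in c+} y_k + sum_{k in c-} (1 - y_k) >= 1
        row = [0.0] * n_vars
        for k in pos: row[k] = -1.0
        for k in neg: row[k] = 1.0
        A.append(row); b.append(len(neg) - 1.0)
    for i, (pos, neg) in enumerate(soft):
        # sum_{k in c+} y_k + sum_{k in c-} (1 - y_k) >= z_i
        row = [0.0] * n_vars
        for k in pos: row[k] = -1.0
        for k in neg: row[k] = 1.0
        row[n_preds + i] = 1.0
        A.append(row); b.append(float(len(neg)))
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n_vars)
    return res.x[:n_preds], res.x[n_preds:]

# Toy instance: hard clause (y0); soft clauses (y1) and (not y0 or y1).
y, z = lp_relaxation(2, hard=[([0], [])],
                     soft=[([1], []), ([1], [0])], weights=[1.0, 1.0])
```

Rounding the fractional $y_k$ (e.g., by randomized rounding) then yields the MAP assignment, with the approximation guarantee noted above in the unit-weight case.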
