



\section{Introduction}

Vast quantities of information are encoded on the Web in natural
language. In order to render this information into structured form
for easy analysis, researchers have developed methods for {\em
relation extraction} (RE). The most successful techniques use
supervised machine learning to generate extractors from a training
corpus of sentences labeled with the arguments of the relations of
interest. Unfortunately, these
supervised methods require hundreds or thousands of training
examples per relation, and thus have proven too expensive for use in
constructing Web-scale knowledge bases.

To reduce this cost, researchers introduced ``weak'' or ``distant''
supervision, which creates its own training data by heuristically
matching the contents of an ontology to text~\cite{craven-ismb99}.
For example, suppose the tuple \mtt{(Jelani\ Jenkins, Urban\ Meyer)}
is an instance of the relation \mtt{isCoachedBy}. Then the sentence
`` `Our captain, Jelani Jenkins, is a phenomenal athlete,' said the
Gators' head coach, Urban Meyer.'' is a silver training
sentence\footnote{Such sentences are called silver because they may
contain noise.} for \mtt{isCoachedBy}.
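The heuristic-matching idea behind silver sentences can be sketched in a few lines. This is a minimal illustration only: simple substring matching stands in for real entity matching, and all function and variable names are invented for this sketch, not part of any actual system.

```python
# Sketch of knowledge-based weak supervision: any unlabeled sentence
# that mentions both arguments of a known relation instance is
# heuristically labeled as a silver training sentence.
# All names here are illustrative stand-ins.

def silver_sentences(instances, corpus):
    """instances: (arg1, arg2) tuples of one relation; corpus: sentences."""
    silver = []
    for arg1, arg2 in instances:
        for sentence in corpus:
            # Substring match is a crude stand-in for entity matching.
            if arg1 in sentence and arg2 in sentence:
                silver.append((sentence, arg1, arg2))
    return silver

instances = [("Jelani Jenkins", "Urban Meyer")]
corpus = [
    "'Our captain, Jelani Jenkins, is a phenomenal athlete,' "
    "said the Gators' head coach, Urban Meyer.",
    "Urban Meyer spoke to reporters after the game.",
]

# Only the first sentence mentions both arguments, so it alone
# becomes a silver training sentence.
matches = silver_sentences(instances, corpus)
print(len(matches))
```

Because the matching is purely heuristic, a sentence can mention both arguments without expressing the relation, which is exactly the noise that makes these sentences silver rather than gold.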


However, prior work on weak
supervision~\cite{riedel-ecml10,hoffmann-acl10,hoffmann-acl11}
assumes the target relation is an atomic relation already defined in
a large ontology (\eg\ Freebase, Wikipedia), where thousands of
instances are immediately available for free. Unfortunately,
although it is very convenient for users to define a relation with a
few labeled instances, weak supervision cannot directly handle such
arbitrarily defined relations, because the labeled instances are too
few to train a reliable extractor.
%It is an interesting and open question to leverage weak supervision
%to learn the extractors with minimal labeled instances.

\begin{figure}
\vspace*{-2cm}
 \begin{center}
{\resizebox*{3.2in}{!}{\rotatebox{0}
%{\includegraphics{figs/onsmoo_showoff2_silver.pdf}}} }
{\includegraphics{figs/systemoverview2012.pdf}}} }
 \end{center}
\vspace*{-0.2in}
 \caption{System overview} \label{f:systemoverview}
%\vspace*{-0.1in}
\end{figure}

This paper presents \sys, a system that learns extractors from
minimally labeled relations through a novel technique called {\em
ontological smoothing}, which exploits large ontologies and an
unlabeled textual corpus. As Figure~\ref{f:systemoverview} shows,
\sys\ works in four phases. The first phase generates a mapping from
the target relation, with its labeled instances, to the ontology,
yielding \emph{mapped relations}. The second phase queries the
ontology with the mapped relations to retrieve a large number of
\emph{smoothed instances}. The third phase generates silver training
sentences by heuristically matching both labeled and smoothed
instances against the unlabeled corpus. Finally, the fourth phase
learns an extractor from these sentences.

Finding good mapped relations is the key step in ontological
smoothing. It is more complex than simply choosing the single mapped
relation that seems most similar to the target relation: one must
consider the huge mapping space formed by operations such as union,
join and select. For example, Freebase is a large ontology with
considerable information about athletics, but that information is
broken into separate relations for individual sports and has been
normalized in a manner that precludes a simple mapping to
\mtt{isCoachedBy}. An accurate mapping requires choosing among
myriad candidate multi-join queries.
%Secondly, one must jointly map relation, type and entity.
\sys\ uses Markov Logic to perform fast probabilistic inference
during the mapping process, and returns the mapped
relation\footnote{We abbreviate the names of the original Freebase
relations for better readability. $\Join$, $\cup$ and $|$ stand for
join, union and select. The mapped relation is a union of three
joined relations, each selected by a type signature such as
footballPlayer.} shown in Figure~\ref{f:systemoverview} for
\mtt{isCoachedBy}.
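For concreteness, such a mapped relation can be written in relational algebra as a union of multi-join queries, one per sport. Superscript indices provide a shorthand for relation names in projections (\eg\ 1.name abbreviates baseballPlayer.name); the sketch below illustrates the shape of the query, with relation names abbreviated from Freebase:

```latex
% Illustrative sketch only; relation names abbreviated from Freebase.
\begin{align*}
&\pi_{\textrm{1.name,2.name}}(\text{baseballPlayer}^1 \Join \text{baseballCurrentTeam}\\
&\ \ \ \ \Join \text{baseballRosterPosTeam} \Join \text{baseballManager} \Join \text{coach}^2)\\
&\cup\ \pi_{\textrm{3.name,4.name}}(\text{footballPlayer}^3 \Join \text{footballTeam} \Join \text{footballRosterPosTeam}\\
&\ \ \ \ \Join \text{footballTeamHeadCoach} \Join \text{footballCoach}^4)\\
&\cup\ \pi_{\textrm{5.name,6.name}}(\text{basketballPlayer}^5 \Join \text{drafted} \Join \text{draftedTeam}\\
&\ \ \ \ \Join \text{basketballTeamCoach} \Join \text{coach}^6)
\end{align*}
```

Each branch of the union joins a sport-specific player relation through its team and coaching relations, which is why no single atomic Freebase relation can serve as the mapping by itself.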

\sys\ makes the following contributions:
\begin{enumerate}
\item We introduce ontological smoothing, a novel approach for learning
relation extractors with minimal supervision.

\item  Our approach is based on a new ontology-mapping algorithm, which uses
 probabilistic joint inference on schema- and instance-level
features to explore the space of complex mappings defined using
select, join and union operators.

\item We present experiments on two target ontologies, using Freebase as
  background knowledge, which demonstrate that ontological smoothing
  can produce dramatic improvements to both the precision and
  recall of extraction.
\end{enumerate}




%\begin{example}
%\label{e:coach}
%\begin{small}
%\textup{
%\begin{align*}
%&\pi_{\textrm{1.name,2.name}}(\mbox{$\text{baseballPlayer}^1$ $\Join$ baseballCurrentTeam }\\
%&\mbox{\ \ \ \ \ \ $\Join$ baseballRosterPosTeam $\Join$ baseballManager $\Join$ $\text{coach}^2)$}\\
%&\mbox{$\cup$ $\pi_{\text{3.name,4.name}}(\text{footballPlayer}^3$ $\Join$ footballTeam $\Join$ footballRosterPosTeam }\\
%&\mbox{\ \ \ \ \ \ $\Join$ footballTeamHeadCoach $\Join$ $\text{footballCoach}^4)$}\\
%&\mbox{$\cup$ $\pi_{\text{5.name,6.name}}(\text{basketballPlayer}^5$ $\Join$ drafted $\Join$ draftedTeam}\\
%&\mbox{\ \ \ \ \ \ $\Join$ basketballTeamCoach $\Join$ $\text{coach}^6)$}
%\end{align*}
%}
%\end{small}
%\end{example}



%\subsection{Smoothing with KB Weak Supervision}
%
%Once the system has found a mapping from the background ontology,
%\sys\ can generate new candidate tuples for the target relations.
%Continuing the example, suppose that when the view is executed on
%Freebase it returns the fact that ``Will Muschamp'' is the coach of
%``Jeff Demps.'' \sys\ searches an unlabeled corpus of text, such as
%New York Times articles, for sentences containing synonyms for both
%entities. Knowledge-based weak supervision treats each such
%heuristically-labeled sentence as a positive example of the {\tt
%isCoachedBy} relation~\cite{craven-ismb99,hoffmann-acl11}, and \sys\
%learns a CRF extractor with a combination of the original and
%newly-generated training data.
%
%Compared to prior KB weak supervision
%works~\cite{riedel-ecml10,hoffmann-acl10,hoffmann-acl11} focusing on
%atomic relations defined already in a large ontology (\eg\ Freebase,
%Wikipedia), this paper is the first work to extend weak supervision
%to any relation defined in few training examples.
%
%\subsection{Contributions}
%
%The rest of this paper details the process of ontological smoothing,
%but the most attention is given to \sys's method for generating the
%ontological mappings. Section~\ref{s:map} formulates the mapping
%problem as finding the highest probability global mapping between
%all target entities, types and relations and the background
%ontology. Next, section~\ref{s:inf} explains how we compute the best
%mapping using Markov Logic and blocked Gibbs sampling to perform
%fast, approximate probabilistic inference. Section~\ref{s:re}
%summarizes how we use the mapping to generate new examples and
%perform knowledge-based weak supervision. In section~\ref{s:setup}
%and \ref{s:exp} we describe the experimental setup and results.
%Section~\ref{s:related} discusses related work, and
%section~\ref{s:conclude} concludes. Overall, this paper makes the
%following contributions.
%

