\section{Relation Extraction}
\label{s:re}

After mapping the target ontology into the background knowledge,
\sys\ applies knowledge-based weak supervision~\cite{hoffmann-acl11}
to heuristically match both the seed instances and the larger set of
instances of the mapped relations to corresponding text.  For
example, suppose that $r(e_1, e_2) = \texttt{Founded}(\texttt{Jobs},
\texttt{Apple})$ is a ground tuple and $s=$``Steve Jobs founded
Apple, Inc.'' is a sentence containing synonyms for both
$e_1=\texttt{Jobs}$ and $e_2=\texttt{Apple}$.  Then $s$ may be a
natural language expression of the fact that $(e_1, e_2)\in r$ holds
and could be a useful training example.
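This matching heuristic can be sketched as follows.  The tuple,
synonym table, and helper names below are illustrative assumptions
for exposition, not part of \sys:

```python
# Sketch of the weak-supervision matching heuristic: a sentence
# becomes a candidate training example for relation r whenever it
# mentions (synonyms of) both arguments of a known ground tuple.
# The data below is illustrative, not from the actual system.

KB = {("Jobs", "Apple"): "Founded"}  # ground tuples -> relation name

# Hypothetical synonym table mapping entities to surface forms.
SYNONYMS = {
    "Jobs": {"Jobs", "Steve Jobs"},
    "Apple": {"Apple", "Apple, Inc."},
}

def match(sentence, kb=KB, synonyms=SYNONYMS):
    """Return (tuple, relation) pairs heuristically supported by sentence."""
    examples = []
    for (e1, e2), rel in kb.items():
        if any(s in sentence for s in synonyms[e1]) and \
           any(s in sentence for s in synonyms[e2]):
            examples.append(((e1, e2), rel))
    return examples

print(match("Steve Jobs founded Apple, Inc."))
# -> [(('Jobs', 'Apple'), 'Founded')]
```

Note that the heuristic fires on any co-occurrence of the two
entities, which is exactly the source of noise discussed next.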

Unfortunately, this heuristic often leads to noisy data and poor
extraction performance. To address this problem, Riedel et
al.~\cite{riedel-ecml10} cast weak supervision as a form of
multi-instance learning, assuming only that {\em at least one} of
the sentences containing $e_1$ and $e_2$ expresses $(e_1, e_2)
\in r$.
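Writing $\mathcal{S}(e_1, e_2)$ for the set of sentences mentioning
both entities (notation introduced here only for exposition), the
at-least-one assumption can be stated as:
\[
(e_1, e_2) \in r \;\Longrightarrow\; \exists\, s \in \mathcal{S}(e_1, e_2)
\text{ such that } s \text{ expresses } r(e_1, e_2).
\]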

In our work, we use the publicly available MultiR
system~\cite{hoffmann-acl11}, which generalizes Riedel \etal's method with a
faster model that also allows relations to overlap. For
example, MultiR can consistently handle the fact that {\tt Founded(Jobs,
  Apple)} and {\tt CEO-of(Jobs, Apple)} are both true.  MultiR uses a
probabilistic graphical model that combines a sentence-level extraction
component with a simple, corpus-level component for aggregating the
individual facts.

MultiR's extraction decisions are almost entirely driven by sentence-level
reasoning. However, defining aggregate-level random variables $Y$ for
individual facts and tying them to the sentence-level extraction variables
$Z$ provides a direct method for modeling weak supervision.
The model is trained so that the aggregate variables $Y$ match the facts
in the database, while the sentence-level variables $Z$ are treated as
hidden variables that can take any value, as long as they produce the
correct aggregate predictions.
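Concretely, the tie between the two layers can be modeled as a
deterministic OR, as in~\cite{hoffmann-acl11}: the aggregate variable
$Y^{r}$ for relation $r$ is true exactly when at least one
sentence-level variable takes the value $r$,
\[
Y^{r} = 1 \;\Longleftrightarrow\; \exists\, i \text{ such that } Z_i = r .
\]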

During learning, MultiR uses a Perceptron-style additive parameter
update scheme that has been modified to reason about hidden
variables, similar in style to the approaches of \cite{zettlemoyer2007online,liang06discriminative}.
The inference required during learning is performed with a greedy
approximation to a weighted edge-cover problem.
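The flavor of this latent-variable update can be sketched as follows.
The feature representation, function names, and the greedy flipping
rule used to approximate the edge cover are illustrative assumptions,
not MultiR's exact implementation:

```python
# Hedged sketch of a latent-variable perceptron update in the style
# MultiR uses.  Features and weights are dicts of name -> value; the
# greedy "flip the cheapest sentence" rule approximates the weighted
# edge-cover inference (each gold fact must be covered by >= 1 sentence).

from collections import defaultdict

def score(weights, feats):
    return sum(weights[f] * v for f, v in feats.items())

def predict(weights, sentence_feats, relations):
    """Independently pick the best relation label for each sentence."""
    return [max(relations, key=lambda r: score(weights, feats[r]))
            for feats in sentence_feats]

def constrained_fill(weights, sentence_feats, gold_rels, relations):
    """Greedy approximation to the weighted edge-cover problem:
    choose labels that cover every gold relation at least once while
    otherwise maximizing the model score."""
    z = predict(weights, sentence_feats, relations)
    for r in gold_rels:
        if r not in z:
            # flip the sentence where forcing label r costs the least
            i = min(range(len(z)),
                    key=lambda j: score(weights, sentence_feats[j][z[j]])
                                  - score(weights, sentence_feats[j][r]))
            z[i] = r
    return z

def perceptron_update(weights, sentence_feats, gold_rels, relations, lr=1.0):
    """Additive update toward the gold-consistent labeling and away
    from the unconstrained prediction, on sentences where they differ."""
    pred = predict(weights, sentence_feats, relations)
    gold = constrained_fill(weights, sentence_feats, gold_rels, relations)
    for feats, zp, zg in zip(sentence_feats, pred, gold):
        if zp != zg:
            for f, v in feats[zg].items():
                weights[f] += lr * v
            for f, v in feats[zp].items():
                weights[f] -= lr * v
    return weights
```

For example, starting from zero weights with a single sentence whose
gold fact is {\tt Founded}, one update moves weight onto the features
of the forced {\tt Founded} labeling and off the predicted one, after
which the model predicts {\tt Founded} for that sentence.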
