

\section{Weak Supervision from a Database}

Given a corpus of text, we seek to extract facts about {\em entities},
such as the company {\tt Apple} or the city {\tt Boston}.  
A  {\em ground fact} (or {\em relation instance}) is an expression
$r(\E)$ where $r$ is a {\em relation name}, for example {\tt Founded} or {\tt CEO-of},
and $\E = e_1, \ldots, e_n$ is a list of entities.

An {\em entity mention} is a contiguous sequence of textual tokens
denoting an entity. In this paper we assume that there is an {\em
  oracle} which can identify all entity mentions in a corpus, but the
oracle does not normalize or disambiguate these mentions. We use
$e_i \in E$ to denote both an entity and its name (\ie, the tokens in
its mention).

A {\em relation mention} is a sequence of text (containing one or
more entity mentions) that states that some ground fact $r(\E)$ is true.  For
example, ``Steve Ballmer, CEO of Microsoft, spoke recently at CES.''
contains three entity mentions as well as a relation mention for {\tt
  CEO-of(Steve Ballmer, Microsoft)}. In this paper we restrict our
attention to binary relations. Furthermore, we assume that both entity
mentions appear as noun phrases in a single sentence.
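The definitions above can be sketched as simple data structures; the class and field names below are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class EntityMention:
    """A contiguous span of tokens denoting an entity.

    Per the oracle assumption, mentions are identified but not
    normalized or disambiguated."""
    tokens: Tuple[str, ...]  # surface tokens of the mention
    start: int               # token offset within its sentence


@dataclass(frozen=True)
class GroundFact:
    """A relation instance r(e1, ..., en); restricted here to binary
    relations, with entity names (not unique ids) as arguments."""
    relation: str            # e.g. "CEO-of"
    args: Tuple[str, ...]    # e.g. ("Steve Ballmer", "Microsoft")


fact = GroundFact("CEO-of", ("Steve Ballmer", "Microsoft"))
```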

% \subsection{Task Definitions}

The task of {\em aggregate extraction} takes two inputs: $\Sigma$, a
set of sentences comprising the corpus, and an
extraction model; as output it produces a set of ground facts,
$I$, such that each fact $r(\E) \in I$ is expressed somewhere in the
corpus.

{\em Sentential extraction} takes the same input and likewise
produces $I$, but in addition it also produces a function, $\Gamma : I
\rightarrow {\cal P}(\Sigma)$, which identifies, for each $r(\E) \in
I$, the set of sentences in $\Sigma$ that contain a mention describing
$r(\E)$.  In general, aggregate extraction
is easier, since it need only make corpus-level predictions, perhaps
using corpus-wide statistics. In contrast, sentential extraction
must justify each extraction with {\em every} sentence that expresses
the fact.
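The two tasks can be contrasted with a small sketch: both return $I$, but sentential extraction additionally returns the provenance map $\Gamma$. The function and type names are hypothetical, and the toy sentence-level model stands in for whatever extraction model is learned.

```python
from typing import Callable, Dict, List, Set, Tuple

Sentence = str
Fact = Tuple[str, Tuple[str, str]]  # (relation name, (arg1, arg2))


def aggregate_extract(corpus: List[Sentence],
                      model: Callable[[Sentence], Set[Fact]]) -> Set[Fact]:
    """Return the set I of ground facts expressed somewhere in the corpus."""
    facts: Set[Fact] = set()
    for s in corpus:
        facts |= model(s)
    return facts


def sentential_extract(corpus: List[Sentence],
                       model: Callable[[Sentence], Set[Fact]]
                       ) -> Tuple[Set[Fact], Dict[Fact, Set[Sentence]]]:
    """Also return Gamma: for each fact in I, every sentence expressing it."""
    gamma: Dict[Fact, Set[Sentence]] = {}
    for s in corpus:
        for f in model(s):
            gamma.setdefault(f, set()).add(s)
    return set(gamma), gamma


# A toy sentence-level model for illustration (not the paper's extractor):
def toy_model(s: Sentence) -> Set[Fact]:
    if "CEO of Microsoft" in s:
        return {("CEO-of", ("Steve Ballmer", "Microsoft"))}
    return set()


corpus = ["Steve Ballmer, CEO of Microsoft, spoke recently at CES.",
          "An unrelated sentence."]
I, gamma = sentential_extract(corpus, toy_model)
```

Note that `aggregate_extract(corpus, toy_model)` yields the same $I$; only the per-fact sentence sets in `gamma` distinguish the harder sentential task.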

The {\em knowledge-based weakly supervised learning} problem takes as input (1)
$\Sigma$, a training corpus, (2) $E$, a set of entities mentioned in
that corpus, (3) $R$, a set of relation names, and (4) $\Delta$, a
set of ground facts of relations in $R$. As output the learner
produces an extraction model.
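One common way a learner can exploit $\Delta$ is the distant-supervision matching heuristic: treat each sentence that mentions both arguments of a database fact as a candidate mention of that fact. This sketch is a hedged illustration of that heuristic, not the learning procedure defined in this paper.

```python
from typing import Dict, List, Set, Tuple

Fact = Tuple[str, Tuple[str, str]]  # (relation name, (arg1, arg2))


def weak_labels(corpus: List[str],
                delta: Set[Fact]) -> Dict[Fact, List[str]]:
    """Heuristically match facts in Delta to training sentences: a
    sentence is a candidate mention of r(e1, e2) if both entity names
    occur in it (a noisy assumption; the sentence may express neither
    r nor any relation between e1 and e2)."""
    labels: Dict[Fact, List[str]] = {f: [] for f in delta}
    for sent in corpus:
        for fact in delta:
            _rel, (e1, e2) = fact
            if e1 in sent and e2 in sent:
                labels[fact].append(sent)
    return labels


delta = {("CEO-of", ("Steve Ballmer", "Microsoft"))}
corpus = ["Steve Ballmer, CEO of Microsoft, spoke recently at CES.",
          "Microsoft released a new product."]
labels = weak_labels(corpus, delta)
```

The resulting heuristic labels can then be used to train the extraction model; their noise is precisely what makes the supervision "weak."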

%Recall that our definition of relation instance allows the tuples to
%contain (potentially ambiguous) names, rather than requiring unique
%entity identifiers. Thus our definitions require that an extractor
%normalize relations but does not force the same treatment of its
%arguments.
