\begin{figure}[t]
\algsetup{indent=1.5em}
\begin{algorithmic}
\STATE {\bf Inputs:} \\
(1) $\Sigma$, a set of sentences, \\
(2) $E$, a set of entities mentioned in the sentences,\\
(3) $R$, a set of relation names, and \\
(4) $\Delta$, a database of atomic facts of the form $r(e_1, e_2)$
for $r\in R$ and $e_i\in E$.\\[5pt]
\STATE {\bf Definitions:}\\ We define the training set
$\{(\mathbf{x}_i, \mathbf{y}_i) | i = 1 \ldots n\}$, where $i$ is an
index corresponding to a particular entity pair $(e_j,e_k)$ in $\Delta$,
$\mathbf{x}_i$ contains all of the sentences in $\Sigma$ with mentions of this
pair, and $\mathbf{y}_i= ~${\bf relVector}($e_j, e_k$).\\[5pt]
\STATE{\bf Computation:}
\STATE {\bf initialize} parameter vector $\theta \gets \mathbf{0}$
\FOR{$t = 1 \ldots T$}
  \FOR{$i = 1 \ldots n$}
      \STATE $(\mathbf{y}',\mathbf{z}') \gets \operatorname{arg\,max}_{\mathbf{y},\mathbf{z}} p(\mathbf{y}, \mathbf{z} | \mathbf{x}_i ; \theta)$
      \IF{$\mathbf{y}' \neq \mathbf{y}_i$}
         \STATE $\mathbf{z}^* \gets \operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{z} | \mathbf{x}_i, \mathbf{y}_i; \theta)$
         \STATE $\theta \gets \theta + \phi(\mathbf{x}_i,\mathbf{z}^*) - \phi(\mathbf{x}_i,\mathbf{z}')$
      \ENDIF
  \ENDFOR
\ENDFOR
\STATE {\bf Return} $\theta$
\end{algorithmic}
\vspace{-10pt}
\caption{The \sys\ Learning Algorithm}
\vspace{-5pt}
\label{fig:learn}
\end{figure}

\section{Learning}

We now present a multi-instance learning algorithm for our
weak-supervision model that treats the sentence-level extraction
random variables $Z_i$ as latent, and uses facts from a database (\eg,
Freebase) as supervision for the aggregate-level variables $Y^r$.

As input we have (1) $\Sigma$, a set of sentences, (2) $E$, a set of
entities mentioned in the sentences, (3) $R$, a set of relation names,
and (4) $\Delta$, a database of atomic facts of the form $r(e_1, e_2)$
for $r\in R$ and $e_i\in E$.  Since we are using weak supervision, the
$Y^r$ variables in $\mathbf{Y}$ are not directly observed, but can be
approximated from the database $\Delta$.  We use a procedure, {\bf
  relVector}($e_1, e_2$), to return a bit vector whose $j^{\text{th}}$
bit is one if $r_j(e_1, e_2)\in\Delta$. The vector does {\em not} have
a bit for the special {\em none} relation; if there is no relation
between the two entities, all bits are zero.  
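As a concrete illustration, {\bf relVector} can be sketched in a few lines of Python; representing $\Delta$ as a set of (relation, $e_1$, $e_2$) triples is an assumption made for the example, not a claim about the actual implementation.

```python
# Illustrative sketch of relVector; the triple-set layout for the
# database Delta is an assumption made for this example.
def rel_vector(e1, e2, relations, delta):
    """Bit vector whose j-th bit is 1 iff r_j(e1, e2) is in delta.
    There is no bit for the special 'none' relation: if no relation
    holds between e1 and e2, every bit is zero."""
    return [1 if (r, e1, e2) in delta else 0 for r in relations]

relations = ["bornIn", "presidentOf", "diedIn"]
delta = {("bornIn", "Obama", "Hawaii")}
print(rel_vector("Obama", "Hawaii", relations, delta))  # [1, 0, 0]
```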

We can now define the training set as the pairs
$\{(\mathbf{x}_i, \mathbf{y}_i) | i = 1 \ldots n\}$, where $i$ is an
index corresponding to a particular entity pair $(e_j,e_k)$,
$\mathbf{x}_i$ contains all of the sentences with mentions of this
pair, and $\mathbf{y}_i= ~${\bf relVector}($e_j, e_k$).
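Constructing this training set amounts to grouping sentences by the entity pair they mention and pairing each group with its fact vector. The sketch below assumes each sentence in $\Sigma$ is given as an ($e_1$, $e_2$, text) triple and $\Delta$ as a set of (relation, $e_1$, $e_2$) triples; both layouts are illustrative.

```python
from collections import defaultdict

# Sketch of building the training set {(x_i, y_i)}.  The (e1, e2, text)
# sentence layout and the triple-set layout for Delta are assumptions
# made to keep the example self-contained.
def build_training_set(sentences, relations, delta):
    by_pair = defaultdict(list)
    for e1, e2, text in sentences:
        by_pair[(e1, e2)].append(text)  # x_i: all sentences for this pair
    return [
        # y_i = relVector(e1, e2): one bit per relation name
        (texts, [1 if (r, e1, e2) in delta else 0 for r in relations])
        for (e1, e2), texts in by_pair.items()
    ]
```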
  
Given this form of supervision, we would like to find the setting for $\theta$ with the highest
likelihood:
\[
O(\theta) = \prod_{i} p(\mathbf{y}_i |\mathbf{x}_i; \theta) = \prod_i\sum_\mathbf{z} p(\mathbf{y}_i, \mathbf{z} |\mathbf{x}_i; \theta)
\]
  
However, this objective is difficult to optimize exactly, and
algorithms for doing so are unlikely to scale to data sets of the
size we consider.  Instead, we make two approximations, described
below, leading to a Perceptron-style additive parameter-update
scheme~\cite{collins02} that has been modified to reason about hidden
variables, similar in style to the approaches
of~\cite{liang06discrimative,zettlemoyer2007online} but adapted to our specific model.
This approximate algorithm is computationally efficient and,
as we will see, works well in practice.

Our first modification is to perform online learning instead of optimizing the full objective.  Define the feature sum
$\phi(\mathbf{x},\mathbf{z}) = \sum_j \phi(x_j,z_j)$, where $j$ indexes the sentences in $\mathbf{x}$.
Now, we can define an update based on the gradient of the local log
likelihood for example $i$:
\[
\frac{\partial \log O_i(\theta)}{\partial \theta_j} =
E_{p(\mathbf{z}|\mathbf{x}_i,\mathbf{y}_i;\theta)} [\phi_j(\mathbf{x}_i,\mathbf{z})]
- E_{p(\mathbf{y},\mathbf{z}|\mathbf{x}_i;\theta)} [\phi_j(\mathbf{x}_i,\mathbf{z})]
\]


\noindent 
where the deterministic OR $\Phi^{\text{join}}$ factors ensure that the first expectation assigns positive probability only to assignments that produce the labeled facts $\mathbf{y}_i$, while the second considers all valid sets of extractions.

Of course, these expectations, especially the second one, would be difficult to compute exactly.  Our second modification is a Viterbi approximation: we replace the expectations with maximizations.  Specifically, we compute the most likely sentence extractions given the labeled facts, $\operatorname{arg\,max}_{\mathbf{z}}p(\mathbf{z} | \mathbf{x}_i,\mathbf{y}_i ; \theta)$, and the most likely extraction for the input without regard to the labels, $\operatorname{arg\,max}_{\mathbf{y},\mathbf{z}}p(\mathbf{y}, \mathbf{z} | \mathbf{x}_i; \theta)$.  We then compute the features for these assignments and perform a simple additive update.  The final algorithm is detailed in Figure~\ref{fig:learn}.
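The update loop of Figure~\ref{fig:learn} can be sketched end to end on a toy instance. The linear per-sentence scoring model, the feature-dictionary representation of sentences, and the simplification that both $\operatorname{arg\,max}$ computations decompose into independent per-sentence maximizations (ignoring the deterministic-OR requirement that every labeled relation be extracted at least once) are all assumptions made to keep the example self-contained and runnable, not the paper's inference procedure.

```python
# Toy sketch of the learning loop in Figure "learn".  The linear model,
# the feature-dict sentences, and the per-sentence decomposition of the
# two arg max computations are simplifying assumptions.
def score(theta, feats, rel):
    return sum(theta.get((rel, f), 0.0) * v for f, v in feats.items())

def viterbi_z(theta, x, allowed):
    # pick the highest-scoring allowed relation for each sentence
    return [max(allowed, key=lambda r: score(theta, feats, r))
            for feats in x]

def train(examples, relations, T=10):
    theta = {}
    labels = relations + ["none"]
    for _ in range(T):
        for x, y in examples:
            z_pred = viterbi_z(theta, x, labels)  # ~ arg max p(y,z|x)
            y_pred = [1 if r in z_pred else 0 for r in relations]
            if y_pred != y:
                # ~ arg max_z p(z|x,y): allow only the labeled facts
                active = [r for r, b in zip(relations, y) if b] + ["none"]
                z_star = viterbi_z(theta, x, active)
                for feats, r_star, r_pred in zip(x, z_star, z_pred):
                    for f, v in feats.items():
                        theta[(r_star, f)] = theta.get((r_star, f), 0.0) + v
                        theta[(r_pred, f)] = theta.get((r_pred, f), 0.0) - v
    return theta
```

On a single example whose sentence features signal \emph{diedIn} but whose initial prediction disagrees with the database label, one pass through the loop shifts weight toward the labeled relation and away from the wrongly predicted one, after which the prediction matches the label and no further updates fire.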



