
\section{Model}

Our undirected model is designed to predict (1) the set of all relations which, according to the text
corpus, hold between a given pair of named entities, and (2) the exact sentences where these relations are
expressed. 

The input consists of unnormalized entities, obtained by extracting proper noun phrases (NNPs) from a parsed corpus.
While the output contains normalized relations for pairs of such entities, it does {\em not} 
contain normalized entities, and it does {\em not} contain type information.

Figure \ref{fig-plate} shows the model as a plate diagram. There is a connected component for each 
pair of entities $(e_1,e_2) \in \text{E} \times \text{E}$; each component contains one Boolean output variable 
$Y^{r}$ for each type of relation $r \in \text{R}$ which we would like to predict. For each sentence $i \in \text{S}$
containing the given pair of entities, there is a latent variable $Z^{\text{r}}_i$ which ranges over
the valid relation types and, importantly, the additional value $\mathsf{none}$. In addition, there
are latent variables $Z^{\text{t}_1}_i$ and $Z^{\text{t}_2}_i$ which predict the most likely
types of the unnormalized entities in a given sentence $i$.

\begin{figure*}[htb]
\centering
\subfigure[]{
  \includegraphics[width=1.7in]{plate-model.pdf}
  \label{fig-plate}
}
\subfigure[]{
  \includegraphics[width=4.5in]{fig.pdf}
  \label{fig-example}
}
\caption{(a) Network structure depicted as a plate model and (b) an example network instantiation for the pair of entities {\bf Bill Gates}, {\bf Microsoft}.}
\end{figure*}

Figure \ref{fig-example} shows an example instantiation of the model.

\begin{table*}[htb]
\begin{tabular}{ll}
$\mathrm{R}$ & Set of valid relations, e.g.\ `founderOf', `bornIn' \\
$\mathrm{T}$ & Set of valid argument types, e.g.\ `Person', `Location' \\
$\mathrm{S}$ & Set of sentences $i$, each containing a pair of mentions of the given entities \\
$\mathbf{x}$ & Vector of observed evidence $\mathbf{x}_i$ for each mention $i$ of given pair of entities \\
$\mathbf{Z}$ & Vector of latent variables $\mathbf{Z}_i = (Z^{\text{t}_1}_i, Z^{\text{r}}_i, Z^{\text{t}_2}_i)$, representing type of relation $Z^{\text{r}}_i$ and \\ & types of arguments $Z^{\text{t}_1}_i$, $Z^{\text{t}_2}_i$ for each $i \in \mathrm{S}$ \\ 
$\mathbf{Y}$ & Vector of Boolean output variables $Y^r$ for each relation $r \in \mathrm{R}$ \\
\end{tabular}
\caption{Notation}
\end{table*}


\subsection{Conditional Probability Distribution}

Our conditional probability distribution is defined as follows:
\begin{eqnarray*}
\lefteqn{p(\mathbf{Y} = \mathbf{y}, \mathbf{Z} = \mathbf{z}|\mathbf{x}) \stackrel{\text{\tiny def}}{=} }\\
    && \frac{1}{Z_{\mathbf{x}}}
        \prod_r
        \Phi^{\text{join}}(y^r,\mathbf{z})
        \prod_i
	\Phi^{\text{extract}}(\mathbf{z}_i, \mathbf{x}_i)
\end{eqnarray*}

The factors $\Phi^{\text{join}}$ implement a deterministic OR operator; since we would like to
write the full distribution as an exponentiated sum, we also define $\Phi^{\text{join}}$ using exponentiation:
\[
\Phi^{\text{join}}(y^r,\mathbf{z}) \stackrel{\text{\tiny def}}{=}
\exp \left( \theta^{\text{join}} \phi^{\text{join}}(y^r, \mathbf{z}) \right)
\]

The deterministic OR is enforced by setting $\theta^{\text{join}} \stackrel{\text{\tiny def}}{=} -\infty$ and 
defining the feature function $\phi^{\text{join}}$ to fire exactly when $y^r$ disagrees with the
disjunction of the sentence-level relation variables:
\[
\phi^{\text{join}}(y^r, \mathbf{z})
\stackrel{\text{\tiny def}}{=}
\begin{cases}
1  & \text{if } y^r \neq \left[ \exists i : z^{\text{r}}_i = r \right]  \\
0  & \text{otherwise.} \\
\end{cases}
\]
Any configuration violating the OR thus receives factor value $\exp(-\infty) = 0$, i.e.\ probability zero.
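As a concrete illustration (a hypothetical Python sketch; the function and variable names are ours, not part of the model), the deterministic-OR feature can be computed as:

```python
def phi_join(y_r, z, r):
    """Deterministic-OR feature: 1 iff the output variable y^r disagrees
    with the disjunction of the sentence-level relation variables z,
    i.e. iff the hard constraint is violated."""
    exists = any(z_i == r for z_i in z)
    return int(y_r != exists)

# With theta_join = -infinity, exp(theta_join * phi_join(...)) is 0 for any
# violating configuration and 1 otherwise, so violations get probability 0.
```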

The extraction factors $\Phi^{\text{extract}}$ are given by
\begin{eqnarray*}
\lefteqn{\Phi^{\text{extract}}(\mathbf{z}_i, \mathbf{x}_i) \stackrel{\text{\tiny def}}{=}} \\ 
    &&  \Phi^{\text{t}_1}(z^{\text{t}_1}_i,\mathbf{x}_i)
        \Phi^{\text{t}_2}(z^{\text{t}_2}_i,\mathbf{x}_i)
        \Phi^{\text{r}}(z^{\text{r}}_i,\mathbf{x}_i) \\
    &&  \Phi^{\text{sp}_1}(z^{\text{r}}_i,z^{\text{t}_1}_i)
        \Phi^{\text{sp}_2}(z^{\text{r}}_i,z^{\text{t}_2}_i)
\end{eqnarray*}
Here, factors $\Phi^{\text{t}_1}$ and $\Phi^{\text{t}_2}$ are defined as follows:
\[
\Phi^{\text{t}_1}(z^{\text{t}_1}_i,\mathbf{x}_i) \stackrel{\text{\tiny def}}{=}
\exp \left( \sum_j \theta^{\text{t}_1}_{j} \phi^{\text{t}_1}_j(z^{\text{t}_1}_i,\mathbf{x}_i) \right)
\]
The Boolean feature functions $\phi^{\text{t}_1}_j$ indicate entity types,
and include lexicon features.

The factors $\Phi^{\text{sp}_1}$ and $\Phi^{\text{sp}_2}$ model type
compatibilities between relations and arguments. $\Phi^{\text{sp}_1}$ is defined as
\[
\Phi^{\text{sp}_1}(z^{\text{r}}_i,z^{\text{t}_1}_i) \stackrel{\text{\tiny def}}{=}
\exp \left( \sum_{r \in R, t \in T} \theta^{\text{sp}_1}_{r,t} \phi^{\text{sp}_1}_{r,t}(z^{\text{r}}_i,z^{\text{t}_1}_i)  \right),
\]
where the feature functions $\phi^{\text{sp}_1}$ are
\[
\phi^{\text{sp}_1}_{r,t} (z^{\text{r}}_i,z^{\text{t}_1}_i) \stackrel{\text{\tiny def}}{=}
\begin{cases}
1 & r = z^{\text{r}}_i, t = z^{\text{t}_1}_i \\
0 & otherwise,
\end{cases}
\]
and $\Phi^{\text{sp}_2}$ and $\phi^{\text{sp}_2}$ are defined accordingly.
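Since every factor above is log-linear with Boolean features, the score of a sentence-level assignment reduces to a sum of weights of active features. The following Python sketch (our own; the feature-set keys such as \texttt{t1\_feats} are hypothetical) illustrates how $\log \Phi^{\text{extract}}$ decomposes into the five factor terms:

```python
def log_factor(theta, active_features):
    """Log of a log-linear factor exp(sum_j theta_j * phi_j): with Boolean
    features, this is the sum of weights of the active features."""
    return sum(theta.get(f, 0.0) for f in active_features)

def log_phi_extract(theta, z_i, x_i):
    """Log of Phi^extract for one sentence: the product of the two type
    factors, the relation factor, and the two selectional-preference
    factors becomes a sum of logs.  z_i = (t1, r, t2); x_i supplies the
    observed feature sets (hypothetical keys 't1_feats', 'r_feats',
    't2_feats')."""
    t1, r, t2 = z_i
    score = log_factor(theta, [(f, t1) for f in x_i['t1_feats']])
    score += log_factor(theta, [(f, r) for f in x_i['r_feats']])
    score += log_factor(theta, [(f, t2) for f in x_i['t2_feats']])
    # Selectional-preference factors: one indicator weight per
    # (relation, argument type) pair.
    score += theta.get(('sp1', r, t1), 0.0)
    score += theta.get(('sp2', r, t2), 0.0)
    return score
```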

\subsection{Inference}

For inference, we are interested in the marginal probabilities $p(Y^r=y^r|\mathbf{x})$, which
can be obtained by summing out over the remaining output variables $\mathbf{Y} \setminus Y^r$ as well as
the latent variables $\mathbf{Z}$. However, this is computationally expensive, and so we instead
only compute the most likely configuration (MAP inference) and use that as an approximation.

Predicting $\operatorname{arg\,max}_{\mathbf{y},\mathbf{z}} p(\mathbf{Y} =\mathbf{y}, \mathbf{Z} = \mathbf{z} | \mathbf{x})$ can be efficiently implemented due to the structure of the model. In
particular, we note that the factors $\Phi^{\text{join}}$ represent deterministic dependencies
between $\mathbf{Z}$ and $\mathbf{Y}$, which when satisfied do not affect the probability of a
partial solution. It is thus sufficient to first compute an assignment to $\mathbf{Z}$, ignoring
the deterministic dependencies, and then apply those dependencies to obtain an
assignment for $\mathbf{Y}$. The complexity is $O(|\text{R}|\cdot|\text{T}|\cdot|\text{S}|)$.

Predicting $\operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{Z} = \mathbf{z} | \mathbf{x}, \mathbf{y})$
is slightly more challenging. We can start by computing scores for each possible assignment to
$\mathbf{Z}_i$, $i \in \mathrm{S}$ and storing these in a dynamic programming table. In the next
step, we compute an assignment that respects our output variables $\mathbf{y}$. It turns out that
this problem is a variant of the weighted edge cover problem, for which polynomial-time
optimal solutions exist.

Let $G=(\mathcal{V}, \mathcal{E})$ with $\mathcal{V} = \mathcal{V}^{\mathrm{S}} \cup \mathcal{V}^{\mathbf{y}}$ be a
complete weighted bipartite graph with one node $v^{\mathrm{S}}_i \in \mathcal{V}^{\mathrm{S}}$ 
for each sentence $i \in \mathrm{S}$ and one node $v^{\mathbf{y}}_r \in \mathcal{V}^{\mathbf{y}}$ for each relation $r \in \mathrm{R}$ where $y^r = 1$.
The edge weights are the precomputed scores, $c((v^{\mathrm{S}}_i, v^{\mathrm{\mathbf{y}}}_r)) \stackrel{\text{\tiny def}}{=} \max_{\mathbf{z}_i : z^{\text{r}}_i = r} \Phi^{\text{extract}}(\mathbf{z}_i, \mathbf{x}_i)$.
Our goal is to select a subset of the edges which maximizes the sum of their weights, subject to
each node $v^{\mathrm{S}}_i \in \mathcal{V}^{\mathrm{S}}$ being incident to exactly one edge, and
each node  $v^{\mathbf{y}}_r \in \mathcal{V}^{\mathbf{y}}$ being incident to at least one edge. 

\begin{figure}[htb]
\centering
\includegraphics[width=3in]{graph-matching.pdf}
\caption{Inference of $\operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{Z} = \mathbf{z} | \mathbf{x}, \mathbf{y})$ requires solving a weighted edge cover problem.}
\label{graph-matching}
\end{figure}

\paragraph{Exact Solution}
An exact solution can be obtained by first computing a maximum weight bipartite matching,
and then adding, for each node not yet incident to an edge, its highest-weight incident edge. This can be computed in time
$O(|\mathcal{V}|(|\mathcal{E}|+|\mathcal{V}| \log |\mathcal{V}|))$, which we can rewrite as
$O((|\mathrm{R}|+|\mathrm{S}|)(|\mathrm{R}||\mathrm{S}|+(|\mathrm{R}|+|\mathrm{S}|)\log (|\mathrm{R}|+|\mathrm{S}|)))$.
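For illustration only, the objective can also be checked by brute force on tiny instances. This hypothetical Python sketch enumerates all assignments of sentences to active relations; a real implementation would use the matching-based algorithm just described:

```python
from itertools import product

def exact_edge_cover(c):
    """Exhaustive solution of the edge cover objective for tiny instances:
    c[i][r] is the weight of edge (sentence i, active relation r).  Each
    sentence picks exactly one relation; every relation must be picked at
    least once.  Exponential in |S|, so this only serves to illustrate
    the objective, not as a practical algorithm."""
    n_sent, n_rel = len(c), len(c[0])
    best, best_assign = float('-inf'), None
    for assign in product(range(n_rel), repeat=n_sent):
        if set(assign) != set(range(n_rel)):  # every relation covered?
            continue
        score = sum(c[i][r] for i, r in enumerate(assign))
        if score > best:
            best, best_assign = score, assign
    return best_assign, best
```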


\paragraph{Approximate Solution}
An approximate solution can be obtained by iterating over the nodes in $\mathcal{V}^{\mathbf{y}}$, and each time
adding the highest weight incident edge which can be added without violating a constraint.
The running time is $O(|\mathrm{R}||\mathrm{S}|)$.
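A sketch of this greedy scheme in Python (hypothetical; \texttt{c[i][r]} holds the precomputed edge weights, and we assume at least as many sentences as active relations):

```python
def greedy_edge_cover(c):
    """Greedy approximation: c[i][r] is the weight of edge (sentence i,
    active relation r).  First cover every relation node with its
    highest-weight still-free sentence, then give each leftover sentence
    its individually best relation.  Runs in O(|R||S|)."""
    n_sent, n_rel = len(c), len(c[0])
    assign = [None] * n_sent
    # Cover each relation node with its best unassigned sentence.
    for r in range(n_rel):
        free = [i for i in range(n_sent) if assign[i] is None]
        i_best = max(free, key=lambda i: c[i][r])
        assign[i_best] = r
    # Each remaining sentence takes its highest-weight relation.
    for i in range(n_sent):
        if assign[i] is None:
            assign[i] = max(range(n_rel), key=lambda r: c[i][r])
    return assign
```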




\subsection{Learning}

We use an adaptation of the Voted Perceptron algorithm for learning. Our adaptation accounts for 
the latent variables $\mathbf{Z}$ and leverages the deterministic OR factors $\Phi^{\text{join}}$
for more accurate updates.

We also note that $\mathbf{Y}$ is not directly observed: we construct an oracle which provides
the values of $\mathbf{Y}$ by matching the pair of entities to entities in Freebase and
checking which relations hold between them in Freebase. To ensure a high-quality training
set, we only consider entity pairs during training which we can match to Freebase entities.

Our algorithm is as follows:

\algsetup{indent=0.5em}
\begin{algorithmic}
\STATE Initialize parameter vector $\mathbf{\Theta} \gets \mathbf{0}$.
\FOR{$t = 1, \ldots, T$}
  \FOR{$(e_1, e_2) \in \textrm{E} \times \textrm{E}$}
      \STATE $\mathbf{y}^0 \gets \operatorname{oracle} (e_1, e_2)$
      \STATE $\mathbf{z}^0 \gets \operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{Z} = \mathbf{z} | \mathbf{x}, \mathbf{y}^0) $ \\
      \STATE \hspace{2em} $ = \operatorname{arg\,max}_{\mathbf{z}} \phi(\mathbf{x},\mathbf{y}^0,\mathbf{z}) \cdot \mathbf{\Theta}$
      \STATE $(\mathbf{z}^*,\mathbf{y}^*) \gets \operatorname{arg\,max}_{\mathbf{y},\mathbf{z}} p(\mathbf{Y}=\mathbf{y}, \mathbf{Z}=\mathbf{z} | \mathbf{x})$ \\
      \STATE \hspace{2em} $ = \operatorname{arg\,max}_{\mathbf{y},\mathbf{z}} \phi(\mathbf{x},\mathbf{y},\mathbf{z})\cdot \mathbf{\Theta}$
\IF{$\mathbf{y}^* \neq \mathbf{y}^0$}
           \STATE $\mathbf{\Theta} \gets \mathbf{\Theta} - \phi(\mathbf{x},\mathbf{y}^*,\mathbf{z}^*) + \phi(\mathbf{x},\mathbf{y}^0,\mathbf{z}^0)$
      \ENDIF
  \ENDFOR
\ENDFOR
\end{algorithmic}
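The loop above can be sketched in Python as follows; \texttt{oracle}, \texttt{argmax\_joint}, \texttt{argmax\_cond}, and \texttt{features} are hypothetical callables standing in for the Freebase oracle, the two inference routines, and the feature extractor:

```python
def perceptron_epoch(pairs, oracle, argmax_joint, argmax_cond, features, theta):
    """One pass of the latent-variable perceptron (hypothetical sketch).
    For each entity pair: get gold y from the oracle, impute the best
    latent z consistent with it, compare against the unconstrained joint
    prediction, and update the weights additively on disagreement."""
    for x, pair in pairs:
        y_gold = oracle(pair)
        z_gold = argmax_cond(x, y_gold, theta)   # arg max_z p(z | x, y^0)
        y_pred, z_pred = argmax_joint(x, theta)  # arg max_{y,z} p(y, z | x)
        if y_pred != y_gold:
            # Move weights toward the imputed gold configuration and away
            # from the predicted one (theta is a sparse dict of weights).
            for f, v in features(x, y_gold, z_gold).items():
                theta[f] = theta.get(f, 0.0) + v
            for f, v in features(x, y_pred, z_pred).items():
                theta[f] = theta.get(f, 0.0) - v
    return theta
```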

