\section{Inference}

To support learning, as described above, we need 
to compute assignments $\operatorname{arg\,max}_{\mathbf{z}}p(\mathbf{z} | \mathbf{x},\mathbf{y}; \theta)$ and 
$\operatorname{arg\,max}_{\mathbf{y},\mathbf{z}}p(\mathbf{y}, \mathbf{z} | \mathbf{x}; \theta)$.   In this section, 
we describe algorithms for both cases that use the deterministic OR nodes to simplify the 
required computations.

Predicting the most likely joint extraction $\operatorname{arg\,max}_{\mathbf{y},\mathbf{z}}
p(\mathbf{y}, \mathbf{z} | \mathbf{x} ; \theta)$ can
be done efficiently given the structure of our model. In
particular, we note that the factors $\Phi^{\text{join}}$ represent
deterministic dependencies between $\mathbf{Z}$ and $\mathbf{Y}$,
which when satisfied do not affect the probability of the solution.
It is thus sufficient to independently compute an assignment for each  
sentence-level extraction variable $Z_i$, ignoring the deterministic 
dependencies. The optimal setting for the aggregate variables 
$\mathbf{Y}$ is then simply the assignment that is consistent with these extractions.
The time complexity is $O(|\mathrm{R}|\cdot|\mathrm{S}|)$.
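This independent, per-sentence maximization can be sketched as follows, assuming the precomputed extraction scores are available as a hypothetical sentence-by-relation matrix (whose candidate set would include any NULL label the model uses):

```python
def predict_joint(weights):
    """weights[i][j]: extraction score Phi_extract(x_i, Z_i = r_j) for
    sentence i over all candidate relations r_j.  Each Z_i is maximized
    independently; the deterministic OR factors Phi_join are then
    satisfied by reading off the consistent aggregate assignment."""
    z = [max(range(len(row)), key=lambda j: row[j]) for row in weights]
    y = set(z)  # y^r = 1 exactly for the relations extracted somewhere
    return z, y
```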

Predicting sentence-level extractions given weak supervision facts, 
$\operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{z} | \mathbf{x}, \mathbf{y}; \theta)$, 
is more challenging. We start by computing extraction scores $\Phi^{\text{extract}}(x_i,z_i)$ 
for each possible extraction assignment $Z_i = z_i$ at each sentence $x_i \in \mathrm{S}$, 
and storing the values in a dynamic programming table.  Next, we must find the 
most likely assignment $\mathbf{z}$ that respects our output variables $\mathbf{y}$. It 
turns out that this problem is a variant of the weighted edge-cover problem,
for which polynomial-time optimal solutions exist.

Let $G=(\mathcal{E}, \mathcal{V} = \mathcal{V}^{\mathrm{S}} \cup
\mathcal{V}^{\mathbf{y}})$ be a complete weighted bipartite graph with
one node $v^{\mathrm{S}}_i \in \mathcal{V}^{\mathrm{S}}$ for each
sentence $x_i \in \mathrm{S}$ and one node $v^{\mathbf{y}}_r \in
\mathcal{V}^{\mathbf{y}}$ for each relation $r \in \mathrm{R}$ where
$y^r = 1$.  The edge weights are given by $c((v^{\mathrm{S}}_i,
v^{\mathbf{y}}_r)) \stackrel{\text{\tiny def}}{=}
\Phi^{\text{extract}}(x_i, z_i = r)$.  Our goal is to select a
subset of the edges which maximizes the sum of their weights, subject
to each node $v^{\mathrm{S}}_i \in \mathcal{V}^{\mathrm{S}}$ being
incident to exactly one edge, and each node $v^{\mathbf{y}}_r \in
\mathcal{V}^{\mathbf{y}}$ being incident to at least one edge.
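This objective can be stated compactly in code. The sketch below assumes a hypothetical weight matrix `weights[i][j]` holding the weight of edge $(v^{\mathrm{S}}_i, v^{\mathbf{y}}_{r_j})$; it scores a candidate edge set (one chosen edge per sentence node, as a dict) and rejects covers that leave a relation node unmatched:

```python
def cover_value(z, weights):
    """z: dict sentence index -> relation index, i.e. the single edge
    chosen for each node in V^S.  Returns the summed edge weight, or
    None when some node in V^y is left without an incident edge."""
    if not set(range(len(weights[0]))) <= set(z.values()):
        return None  # an active relation node is uncovered
    return sum(weights[i][j] for i, j in z.items())
```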

\begin{figure}[tb]
\centering
\includegraphics[width=3.1in]{graph-matching.pdf}
\caption{Inference of $\operatorname{arg\,max}_{\mathbf{z}} p(\mathbf{Z} = \mathbf{z} | \mathbf{x}, \mathbf{y})$ requires solving a weighted edge-cover problem.}
\label{graph-matching}
\vspace{-5pt}
\end{figure}

\paragraph{Exact Solution}

An exact solution can be obtained by first computing the maximum
weight bipartite matching and then adding, for each node not yet
incident to an edge, its highest-weight incident edge. This can be computed in time
$O(|\mathcal{V}|(|\mathcal{E}|+|\mathcal{V}| \log |\mathcal{V}|))$,
which we can rewrite as
$O((|\mathrm{R}|+|\mathrm{S}|)(|\mathrm{R}||\mathrm{S}|+(|\mathrm{R}|+|\mathrm{S}|)\log
(|\mathrm{R}|+|\mathrm{S}|)))$.
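A small illustrative version of this two-step recipe follows. For clarity the matching step is naive enumeration over all ways of pairing each active relation with a distinct sentence, so it is only usable on tiny instances; a real implementation would use a polynomial-time matching algorithm (e.g. the Hungarian algorithm) to meet the stated bound. The matrix interface `weights[i][j]` is an assumption, not part of the original presentation:

```python
from itertools import permutations

def exact_edge_cover(weights):
    """weights[i][j]: weight of the edge from sentence node i to the
    node of the j-th active relation.  Step 1: best weighted bipartite
    matching (by enumeration here).  Step 2: every sentence the
    matching leaves uncovered takes its single highest-weight edge."""
    n_s, n_r = len(weights), len(weights[0])
    assert n_s >= n_r, "need at least one sentence per active relation"
    # Step 1: pair each relation j with a distinct sentence p[j].
    best_perm = max(permutations(range(n_s), n_r),
                    key=lambda p: sum(weights[i][j] for j, i in enumerate(p)))
    z = {i: j for j, i in enumerate(best_perm)}
    # Step 2: uncovered sentence nodes take their argmax relation.
    for i in range(n_s):
        if i not in z:
            z[i] = max(range(n_r), key=lambda j: weights[i][j])
    return z
```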


\paragraph{Approximate Solution}

An approximate solution can be obtained by iterating over the nodes in
$\mathcal{V}^{\mathbf{y}}$, and each time adding the highest weight
incident edge whose addition does not violate a constraint.  The
running time is $O(|\mathrm{R}||\mathrm{S}|)$.  This greedy search
guarantees each fact is extracted at least once and allows any
additional extractions that increase the overall probability of the
assignment.  Given the computational advantage, we use it in all of
the experimental evaluations.
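The greedy procedure might be sketched as below, again under the assumed `weights[i][j]` matrix interface. The first pass covers each relation node with its best edge to a still-free sentence; the second pass gives every remaining sentence its single highest-weight edge, which realizes the additional extractions mentioned above:

```python
def greedy_edge_cover(weights):
    """weights[i][j]: extraction score of sentence i for the j-th
    active relation.  O(|R||S|) with the argmax scans shown."""
    n_s, n_r = len(weights), len(weights[0])
    assert n_s >= n_r, "need at least one sentence per active relation"
    z = {}
    # Pass 1: each relation takes its best edge to an unassigned sentence
    # (an edge to an assigned sentence would violate the exactly-one
    # constraint on sentence nodes).
    for j in range(n_r):
        free = [i for i in range(n_s) if i not in z]
        i_best = max(free, key=lambda i: weights[i][j])
        z[i_best] = j
    # Pass 2: every remaining sentence takes its highest-scoring relation.
    for i in range(n_s):
        if i not in z:
            z[i] = max(range(n_r), key=lambda j: weights[i][j])
    return z
```

On the small example used in the tests, the greedy cover happens to coincide with the exact optimum; in general it only approximates it.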



