

\subsubsection{Maximum a Posteriori Inference}
The goal of inference is to compute the assignment $z,\mathbf{y}$ with the greatest probability, \ie~$z^*,\mathbf{y}^* = \arg\max_{z,\mathbf{y}} p(Z=z, \mathbf{Y} = \mathbf{y}|\mathbf{x}; \Theta)$. This is a maximum a posteriori (MAP) inference problem.

In general, inference in graphical models is challenging. Fortunately, the joint factors in our model are linear and the event and relation factors are log-linear, so we can cast MAP inference as an Integer Linear Program (ILP), and then compute an approximate solution in polynomial time by solving the linear programming relaxation and applying randomized rounding, as proposed in~\cite{Yannakakis92}.

We build one ILP problem for every \eec. The ILP variables correspond to $Z$ and $\mathbf{Y}$ and are binary, taking only the values 0 or 1. The objective function is the sum of the logarithms of the event and relation factors $\Phi^Z$ and $\Phi^Y$. The zero-passing operator of $\Phi^{\text{joint}}$ is encoded as the linear inequality constraint $z \geq y_i$; each mutual-exclusion operator of $\Phi^{\text{joint}}$ is encoded as the constraint $\sum_{y_i\in \mathbf{y}_d} y_i \leq 1$.
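For bags with few relations, the same objective can be maximized by exhaustive search, which makes the encoding concrete. The following is a minimal sketch; the function name, the log-factor scores, and the exclusion groups are illustrative, not values from the paper:

```python
from itertools import product

def map_inference(rel_scores, event_score, excl_groups):
    """Exhaustive MAP over binary z and y_1..y_n (illustrative stand-in for
    the ILP; rel_scores[i] and event_score play the role of log factors)."""
    best_score, best_assign = float("-inf"), None
    n = len(rel_scores)
    for z in (0, 1):
        for y in product((0, 1), repeat=n):
            # zero-passing constraint: z >= y_i for every relation variable
            if any(yi > z for yi in y):
                continue
            # mutual exclusion: at most one active relation per group
            if any(sum(y[i] for i in g) > 1 for g in excl_groups):
                continue
            score = z * event_score + sum(yi * s for yi, s in zip(y, rel_scores))
            if score > best_score:
                best_score, best_assign = score, (z, y)
    return best_assign

# hypothetical log-factor scores for three relations; relations 0 and 2
# are mutually exclusive, so only the higher-scoring one is selected
print(map_inference([1.5, -0.5, 2.0], 0.3, [(0, 2)]))  # (1, (0, 0, 1))
```

The enumeration over $2^{n+1}$ assignments is exactly the exponential blow-up that motivates the LP relaxation for realistic bag sizes.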

\def\gold{^{\mathsf{gold}}}

\subsubsection{Learning}
Our training data is composed of $N$ labeled \bag{}s of the form $\{(R_i,R\gold_i)\}_{i=1}^N$. Each $R$ is the set of all relations in the \bag, while $R\gold$ is a manually created subset of $R$ containing the relations that describe the \eec; $R\gold$ may be empty if the \eec\ is not suitable for clustering. For our model, the gold assignment is ${y^r}\gold=1$ iff $r\in R\gold$, and $z\gold=1$ iff $R\gold$ is nonempty.
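Deriving the gold assignment from a labeled bag is mechanical; a sketch (the function name and string relation ids are hypothetical):

```python
def gold_assignment(R, R_gold):
    """Map a labeled bag (R, R_gold) to gold variable assignments:
    y^r = 1 iff r is in R_gold, and z = 1 iff R_gold is nonempty."""
    y_gold = {r: int(r in R_gold) for r in R}
    z_gold = int(bool(R_gold))
    return z_gold, y_gold

z, y = gold_assignment({"r1", "r2", "r3"}, {"r2"})
print(z, y["r2"], y["r1"])  # 1 1 0
```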

Given $\{(R_i,R\gold_i)\}_{i=1}^N$, learning in similar models is commonly done via maximum likelihood estimation:

\[
L(\Theta) = \log \prod_i p(Z_i=z\gold_i,\mathbf{Y}_i=\mathbf{y}\gold_i\mid\mathbf{x}_i;\Theta)
\]

For features in the relation factors, the partial derivative for the $i$th model is:
\[
\Phi_j(\mathbf{y}\gold_i,\mathbf{x}_i)-E_{p(z_i,\mathbf{y}_i\mid\mathbf{x}_i;\Theta)}\Phi_j(\mathbf{y}_i,\mathbf{x}_i)
\]
where $\Phi_j(\mathbf{y}_i, \mathbf{x}_i)=\sum \phi_j(X,Y,\mathbf{x})$ is the sum of the values of the $j$th
feature in the $i$th model, with the values of $X,Y$ taken from the assignment $\mathbf{y}_i$. For features in the event variable, the partial derivative is derived similarly:
\[
\phi_j(z\gold_i,\mathbf{x}_i)-E_{p(z_i,\mathbf{y}_i\mid\mathbf{x}_i;\Theta)}\phi_j(z_i,\mathbf{x}_i)
\]

It is unclear how to efficiently compute
$E_{p(z_i,\mathbf{y}_i\mid\mathbf{x}_i;\Theta)}\phi_j(z_i,\mathbf{x}_i)$ and $E_{p(z_i,\mathbf{y}_i\mid\mathbf{x}_i;\Theta)}\Phi_j(\mathbf{y}_i,\mathbf{x}_i)$: a brute-force approach requires enumerating all assignments of $\mathbf{y}_i$, which is exponential in the number of relations. We therefore opt for the more tractable perceptron-style learning~\cite{collins02,hoffmann2011knowledge}. Instead of computing the expectations, we
simply compute $\phi_j(z^*_i,\mathbf{x}_i)$ and $\Phi_j(\mathbf{y}^*_i,\mathbf{x}_i)$, where $z^*_i,\mathbf{y}^*_i$ is
the most probable assignment produced by the MAP inference algorithm under the current weight vector. The weight updates are:
\begin{align}
&\Phi_j(\mathbf{y}\gold_i,\mathbf{x}_i) - \Phi_j(\mathbf{y}^*_i,\mathbf{x}_i)\label{update_relation_variable}\\
&\phi_j(z\gold_i,\mathbf{x}_i) - \phi_j(z^*_i,\mathbf{x}_i)
\end{align}

The updates can be intuitively read as penalties on errors. In sum, our learning algorithm iterates over two steps: (1) infer the most probable assignment under the current weights; (2) update the weights by comparing the inferred assignment with the gold assignment.
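The two-step loop can be sketched as a single structured-perceptron epoch. Here `infer` stands for any MAP routine (such as the ILP above) and `features` returns sparse feature counts for an assignment; both interfaces are assumptions of this sketch, not names from the paper:

```python
def perceptron_epoch(instances, weights, infer, features, lr=1.0):
    """One pass over the training bags.  For each instance:
    (1) run MAP inference under the current weights, then
    (2) add the gold-minus-predicted feature-count difference to the weights."""
    for x, gold in instances:
        pred = infer(x, weights)          # step (1): most probable assignment
        if pred == gold:
            continue                      # no error, no update
        f_gold, f_pred = features(x, gold), features(x, pred)
        for j in set(f_gold) | set(f_pred):   # step (2): additive update
            weights[j] = weights.get(j, 0.0) + lr * (
                f_gold.get(j, 0.0) - f_pred.get(j, 0.0))
    return weights
```

Averaging the weight vectors across iterations, as in~\cite{collins02}, typically stabilizes this update; the sketch omits it for brevity.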

%Note that the focus of this paper is to obtain high precision results, we make a simple modification: we penalize the weight more on false positives than on false negatives. For example, when seeing a false positive $1=z^*_i>z\gold_i=0$, we multiply the update value by a rate $\delta>1$; otherwise the rate is $1$. Similar procedure can be applied to Equation \ref{update_relation_variable}.











