%\begin{figure*}[htb]
%\centering
%\subfigure[]{
%  \includegraphics[width=1.7in]{plate-model.pdf}
%  \label{fig-plate}
%}
%\subfigure[]{
%  \includegraphics[width=4.6in]{fig.pdf}
%  \label{fig-example}
%}
%    \vspace{-5pt}
%\caption{(a) Network structure depicted as plate model and (b) an
%  example network instantiation for the pair of entities {\tt Steve Jobs}, {\tt Apple}.}
%  \vspace{-5pt}
%\end{figure*}

\section{Relation Clustering Framework}
\label{section_framework}
The main goal of this work is to pursue high-precision relation clusters. News streams are a promising resource for this goal: articles from different sources describe the same daily events using semantically equivalent phrases, and these phrases can compose high-quality relation clusters. For example, on one day a reader may see the news: \mtt{``Armstrong steps down from Livestrong''}; \mtt{``Armstrong resigns from Livestrong''}; and \mtt{``Armstrong cuts ties with Livestrong''}. From these, a good relation cluster \mtt{\{step down from, resign from, cut ties with\}} can be composed.

We design a framework to cluster relations based on the above observation. It has been noticed that entity pairs carry strong signals about relations and events~\cite{hasegawa2004discovering,sekine2005automatic}. In practice, a question like ``What happened to Armstrong and Livestrong on Oct 17?'' often points to a unique event. This suggests that an argument pair together with a timestamp can be an effective way to identify an event.

Suppose we run an Open Information Extraction system on the news stream; we obtain a set of $(a_1,r,a_2,t)$ tuples, where the $a_i$ are arguments, $r$ is a relation phrase and $t$ is the day of the corresponding news article. If $(a_1,a_2,t)$ (\eg\ (Armstrong, Livestrong, Oct 17)) suggests a real-world event, the relation $r$ of $(a_1,r,a_2,t)$ is likely to describe that event. So when a set of relations is tied to the same $(a_1,a_2,t)$, the relations describing that particular event are synonymous. We call every $(a_1, a_2, t)$ an {\em \Eec} (\eec). We refer to the set of relations tied to the \eec\ as the {\em \bag}, and denote it by $(a_1,a_2,t,\{r_1\ldots r_m\})$. As shown in Figure~\ref{systemoverview}, we frame our relation clustering problem as finding the semantically equivalent relations in the \bag\ and then generating the relation cluster.
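The grouping step described above can be sketched in a few lines. This is an illustrative helper (the function name and tuple encoding are assumptions, not the paper's implementation): OpenIE tuples $(a_1,r,a_2,t)$ are grouped into bags keyed by the event candidate $(a_1,a_2,t)$.

```python
from collections import defaultdict

def build_bags(tuples):
    """Group OpenIE tuples (a1, r, a2, t) into bags keyed by the
    event candidate (a1, a2, t). Illustrative sketch only."""
    bags = defaultdict(list)
    for a1, r, a2, t in tuples:
        bags[(a1, a2, t)].append(r)
    return dict(bags)

tuples = [
    ("Armstrong", "step down from", "Livestrong", "Oct 17"),
    ("Armstrong", "resign from", "Livestrong", "Oct 17"),
    ("Armstrong", "cut ties with", "Livestrong", "Oct 17"),
]
bags = build_bags(tuples)
# bags[("Armstrong", "Livestrong", "Oct 17")] holds all three relation phrases
```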

\section{Refined Temporal Hypotheses and Models}
\label{section_temporal}
In this section, we propose a series of refined temporal hypotheses that are useful for high-precision relation clustering, present several models to capture them, and analyze the pros and cons of each.

\subsection{Entity Hypothesis \& Basic Model}
\label{section_basic_model}

The basic observation in Section~\ref{section_framework} can be expressed as the following hypothesis:

\newtheorem{hypothesis}{Hypothesis}
\begin{hypothesis}
	\label{hypo_one}
	We assume that news articles published at the same time and mentioning the same entities tend to describe the same events.
\end{hypothesis}


The basic method to capture Hypothesis~\ref{hypo_one} is to simply predict that all $r_i$ in a \bag\ describe the \eec\ and hence are semantically equivalent. That is, given the \bag\ $(a_1,a_2,t,\{r_1\ldots r_m\})$, the output relation cluster is $\{r_1\ldots r_m\}$.

Empirical investigation of this basic method supports the soundness of Hypothesis~\ref{hypo_one}. Moreover, it indicates whether news streams processed by OpenIE, the input of our learning system, are a promising resource for relation clustering. Unsurprisingly, the basic model introduces many errors; we analyze them and propose solutions in the following sections.

\subsection{News Spike Hypothesis \& Pairwise Model}
\label{section_news_spike}
Not all \Eec s are equally good for clustering. In one day's news, a reader may see both \mtt{``Barack Obama heads to the White House''} and \mtt{``Barack Obama gives speech at the White House''}. Although the two sentences are highly similar in text (sharing all words except the relation phrases) and published at the same time, they do not describe the same event. From a generative point of view, we can treat each sentence as being generated by a particular hidden event. Clearly, many hidden events generate the argument pair \mtt{(Barack Obama, the White House)}, while few hidden events generate \mtt{(Armstrong, Livestrong)}. The latter is more appropriate for clustering because the relations tied to it are more likely to describe a particular event. The question becomes how to distinguish these appropriate \eec s.

When many hidden events can generate an entity pair, it is fair to believe that some of them are less sensitive to time. For example, \mtt{``Barack Obama heads to the White House''} could appear in any day's news. Therefore, we can judge whether an entity pair is appropriate for clustering by looking at the history of the frequency with which the pair is mentioned in the news stream, which is the {\em time series} of that entity pair. The time series of the entity pair (Barack Obama, the White House) tends to be random with many fluctuations, while the time series of the entity pair (Armstrong, Livestrong) tends to stay flat for a long time and then spike suddenly on a certain day. This observation leads to our second hypothesis:

%For our clustering purpose, the appropriate \eec s for clustering tie to fewer event. To find them out, we analyze the time series of the arguments and argument pairs. A time series of an entity can be defined as a sequence of times resulting from the entity mentioned in one day's news. Intuitively, an entity concerned with an important event tends to be very uncommon for a long time but is suddenly mentioned in many news articles on a certain day. The latter results in our second hypothesis.

\begin{hypothesis}
	\label{hypo_two}
We assume that when an entity or an entity pair appears with greater frequency in one day's news than in its history, the corresponding \eec s are likely to be appropriate for relation clustering.
\end{hypothesis}

Hypothesis~\ref{hypo_two} means that an appropriate \eec\ $(a_1,a_2,t)$ tends to have a {\em spike} in the time series of entity $a_i$ (or entity pair $(a_1,a_2)$) on day $t$. To capture this hypothesis, we introduce a set of {\em spike features} to find appropriate \eec s. Given $(a_1,a_2,t)$, example features include: the covariance of the time series; the size of the spike; and binary features indicating whether the $a_i$ have appeared in days before $t$.
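The spirit of these spike features can be sketched as follows. The feature names and the exact statistics (here variance over the series, spike height above the historical mean, and a seen-before flag) are illustrative assumptions, not the paper's exact feature set.

```python
import statistics

def spike_features(series, t):
    """Sketch of spike features for an entity (pair) time series.
    `series` is a list of daily mention counts; `t` indexes the candidate day."""
    history = series[:t]
    hist_mean = statistics.mean(history) if history else 0.0
    return {
        # dispersion of the counts observed up to and including day t
        "variance": statistics.pvariance(series[: t + 1]),
        # how far today's count rises above the historical mean
        "spike_size": series[t] - hist_mean,
        # whether the entity (pair) was mentioned at all before day t
        "seen_before": int(any(c > 0 for c in history)),
    }

# Flat history with a sudden spike -> large spike_size, seen_before = 0
feats = spike_features([0, 0, 0, 0, 12], 4)
```

An appropriate \eec\ like (Armstrong, Livestrong, Oct 17) would score a large `spike_size` with `seen_before = 0`, while a pair like (Barack Obama, the White House) would show a noisy history instead.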

A simple learning model employing spike features is a {\em pairwise model}: we compare pairs of relations in the \bag\ and predict each pair as synonymous or non-synonymous. Afterwards, the output clusters are generated according to heuristic rules (\eg\ assuming transitivity among synonyms).
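The transitivity heuristic above can be sketched with a union-find pass over the pairwise predictions. The classifier itself is abstracted away here as a given set of predicted-synonymous pairs; this is an illustration of the cluster-generation step, not the paper's implementation.

```python
def clusters_from_pairs(relations, synonymous):
    """Turn pairwise synonymy predictions into clusters by assuming
    transitivity (union-find over predicted-synonymous pairs)."""
    parent = {r: r for r in relations}

    def find(r):
        # Follow parent pointers to the cluster representative,
        # compressing the path as we go.
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    for r1, r2 in synonymous:
        parent[find(r1)] = find(r2)

    groups = {}
    for r in relations:
        groups.setdefault(find(r), set()).add(r)
    return list(groups.values())

rels = ["step down from", "resign from", "cut ties with", "be founder of"]
pairs = [("step down from", "resign from"), ("resign from", "cut ties with")]
result = clusters_from_pairs(rels, pairs)
# -> one cluster of three synonyms plus the singleton "be founder of"
```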

\eat{Besides spike features, a pairwise model can also include features over sentence pairs. For example, tense features are useful, such as if one relation phrase has past tense while the other has future tense. Note two sentences {\tt Armstrong was chairmen of Livestrong.} and {\tt Armstrong steps down from Livestrong.}, Past and present tense of the two sentences suggest the relation phrases may describe different events and may not be synonymous.}

It is noteworthy that the setting of the pairwise model is similar to that of paraphrasing techniques~\cite{dolan2005automatically,SocherEtAl2011:PoolRAE}, which examine two sentences and determine whether they are semantically equivalent. In Section~\ref{empirical_study}, we evaluate the effect of applying paraphrasing techniques to relation clustering.

\subsection{Constraint Hypothesis \& Joint Clustering Model}
\label{section_joint_cluster_model}
Hypothesis~\ref{hypo_two} helps us find appropriate \eec s for clustering. However, a sequence of related but non-synonymous relations may still concern an appropriate \eec. Consider an article containing the sentences: \mtt{``Armstrong is the founder of Livestrong. Armstrong steps down from Livestrong.''} These related but non-synonymous relations can introduce errors. To handle this, we propose:

\begin{hypothesis}
	\label{hypo_three}
A news article tends not to state a fact more than once.
\end{hypothesis}

Hypothesis~\ref{hypo_three} is proposed to trade recall for high precision. It implies that each news article contains at most one relation phrase (possibly zero) describing a particular event. Algorithms based on this hypothesis tend to choose the most likely relation phrase. Moreover, Hypothesis~\ref{hypo_three} allows us to collect relation phrases that are cause-effect candidates, such as \mtt{(shoot, kill)}, which often appear together in news articles with the same arguments. Such pairs are less likely to be synonymous, and can be used to create certain features in our model.

Combining the hypotheses is not straightforward. %One approach would be to gather a list of heuristic in order to create relation clusters satisfying the hypotheses. It is more or less ad-hoc and lacks the flexibility for  adapting new hypotheses.
A common approach is to introduce a joint model with the hypotheses encoded as features and constraints. Data, instead of ad-hoc rules, indicates the relevance of different insights, which can be learned jointly as parameters. New hypotheses can also be introduced easily. Suppose, for instance, there is a hypothesis about tense:

\begin{hypothesis}
	\label{hypo_four}
Semantically equivalent relations describing a particular event tend to have the same tense.
\end{hypothesis}

Hypothesis~\ref{hypo_four} can be used to avoid clustering relations that happened in different time periods. For example, the two sentences \mtt{``Armstrong was the chairman of Livestrong''} and \mtt{``Armstrong steps down from Livestrong''} have past and present tense respectively, which suggests that the relation phrases are less likely to describe the same event and are therefore not semantically equivalent.

A joint model can capture this by introducing certain features with minimal effort. Research on Coreference Resolution (CR) inspires us here: a joint clustering model can better capture global constraints and yield higher-quality results than pairwise CR models~\cite{rahman2009supervised}.

We propose an undirected graphical model, \sys, which clusters relations jointly. Global constraints are captured by factors connecting multiple random variables. We introduce the random variables, the factors, the objective function, and the inference and learning algorithms in the following sections. Figure~\ref{graphmodel} shows an example model for the \eec\ (Armstrong, Livestrong, Oct 17).
\begin{figure}
\centering
\includegraphics[width=3.1in]{graphmodel.pdf}
\caption{An example model for the \eec\ (Armstrong, Livestrong, Oct 17). $Y$ and $Z$ are binary random variables. $\Phi^Y$, $\Phi^Z$ and $\Phi^{\text{joint}}$ are factors. \mtt{be founder of} and \mtt{step down} come from article 1, while \mtt{give speech at}, \mtt{be chairman of} and \mtt{resign from} come from article 2.}
\label{graphmodel}
\end{figure}

\subsubsection{Random Variables}
For a \bag\ $(a_1,a_2,t,\{r_1,\ldots r_m\})$, we introduce one event variable and $m$ relation variables, all boolean-valued. The event variable $Z^{(a_1,a_2,t)}$ indicates whether $(a_1,a_2,t)$ is an appropriate event for clustering. It is designed in accordance with Hypothesis~\ref{hypo_two}: for an \eec\ like (Barack Obama, the White House, Oct 17), $Z$ should be assigned 0.

The relation variable $Y^r$ indicates whether relation $r$ describes the \eec\ $(a_1,a_2,t)$. All event-describing relations with $Y^r=1$ compose a cluster. For example, the assignment $Y^{\textit{step\ down}}=Y^{\textit{resign}}=1$ produces the relation cluster \mtt{\{step down, resign\}}.

\subsubsection{Factors and the Joint Distribution}
In this section, we introduce a conditional probability model defining a joint distribution over all of the event and relation variables. The joint distribution is a product of {\it factors}. Our model contains {\em event factors}, {\em relation factors} and {\em joint factors}.

The event factor $\Phi^Z$ is a log-linear function with spike features, used to distinguish appropriate events. A relation factor $\Phi^Y$ is also a log-linear function. It can be defined over a single relation variable (\eg\ $\Phi^Y_1$ in Figure~\ref{graphmodel}) with features such as whether a relation phrase comes from a clausal complement: relation phrases in clausal complements are less useful for clustering because they often do not describe facts. For example, in the sentence \mtt{He heard Romney had won the election}, the extraction (Romney, had won, the election) is not a fact at all. A relation factor can also be defined over two relation variables (\eg\ $\Phi^Y_2$ in Figure~\ref{graphmodel}) with features capturing pairwise evidence for clustering, such as whether two relation phrases have the same tense.

The joint factors $\Phi^{\text{joint}}$ are defined to apply global constraints. They play two roles in our model: \textit{1)} to satisfy Hypothesis~\ref{hypo_two}, when the event variable is predicted false, the \eec\ is not appropriate for clustering, so all relation variables must take the value false; and \textit{2)} to satisfy Hypothesis~\ref{hypo_three}, at most one relation variable from the same article may take the value true.

We define the joint distribution for our model with the above random variables and factors. Let $\mathbf{Y}=\{Y^{r_i}\}_{i=1}^m$ be the vector of relation variables and let $\mathbf{x}$ be the features. The joint distribution is:
\begin{eqnarray*}
\lefteqn{p(Z = z, \mathbf{Y} = \mathbf{y}|\mathbf{x}; \Theta) \stackrel{\text{\tiny def}}{=} \frac{1}{Z_{x}}\Phi^{Z}(z,\mathbf{x}) }\\
    && \prod_d \Phi^{\text{joint}}(z,\mathbf{y}_d,\mathbf{x})
        \prod_{i,j} \Phi^Y (y_i,y_j,\mathbf{x})
\end{eqnarray*}
\noindent
where $\mathbf{y}_d$ denotes the subset of relation variables from a particular article $d$, and the parameter vector $\Theta$ is the weight vector of the features in $\Phi^Z$ and $\Phi^Y$, which are log-linear functions, \eg
\[
\Phi^{Y}(y_i,y_j,\mathbf{x}) \stackrel{\text{\tiny def}}{=}
\exp \left( \sum_k \theta_{k} \phi_k(y_i,y_j,\mathbf{x}) \right)
\]
where $\phi_k$ is the $k$th feature function.
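A log-linear factor of this form is straightforward to evaluate numerically. The following is a minimal sketch with toy weight and feature vectors (not learned parameters), just to make the definition concrete.

```python
import math

def log_linear_factor(theta, phi):
    """Evaluate a log-linear factor exp(sum_k theta_k * phi_k) for one
    variable assignment. theta and phi are plain lists of floats."""
    return math.exp(sum(t * p for t, p in zip(theta, phi)))

# With all-zero weights every assignment scores exp(0) = 1,
# i.e. the factor is uninformative before learning.
value = log_linear_factor([0.0, 0.0], [1.0, 1.0])
```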

The joint factor $\Phi^{\text{joint}}$ combines a zero-passing operator and a mutual-exclusion operator: it becomes zero when Hypothesis~\ref{hypo_two} or \ref{hypo_three} is violated by the current assignment, and is 1 otherwise. Formally, it is calculated as:
\[
\Phi^{\text{joint}}(z,\mathbf{y}_d,\mathbf{x})
\stackrel{\text{\tiny def}}{=}
\begin{cases}
0  & {\rm if} ~ z=0 \land \exists i : y_i = 1  \\
0  & {\rm if} ~ \sum_{y_i\in \mathbf{y}_d} y_i > 1  \\
1  & {\rm otherwise} \\
\end{cases}
\]
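The case definition above translates directly into code. This sketch evaluates the hard constraint for one article's relation variables $\mathbf{y}_d$ under an event assignment $z$ (the feature argument $\mathbf{x}$ is omitted since the factor ignores it).

```python
def joint_factor(z, y_d):
    """Hard joint factor from the definition above: 0 when the assignment
    violates Hypothesis 2 (z = 0 but some y_i = 1) or Hypothesis 3
    (more than one true relation variable in article d), else 1."""
    if z == 0 and any(y == 1 for y in y_d):
        return 0          # event off, yet a relation claims to describe it
    if sum(y_d) > 1:
        return 0          # two relations from the same article marked true
    return 1

# Valid: the event fires and exactly one relation in the article describes it
ok = joint_factor(1, [1, 0, 0])        # -> 1
# Invalid: two relations from one article both marked as describing the event
violated = joint_factor(1, [1, 1, 0])  # -> 0
```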






