
\section{System Overview}
\label{section_framework}

The main goal of this work is to generate high precision relation
clusters.  News streams are a promising resource, since articles from
different sources tend to use semantically equivalent phrases to
describe the same daily events. For example, when a recent scandal
hit, headlines read: \mtt{``Armstrong steps down from Livestrong''};
\mtt{``Armstrong resigns from Livestrong''} and \mtt{``Armstrong cuts
  ties with Livestrong''}. From these we can conclude that the
following relation phrases are semantically similar: \mtt{\{step down
  from, resign from, cuts ties with\}}.

To realize this intuition, our first challenge is to represent an
event. In practice, a question like {\it ``What happened to Armstrong
  and Livestrong on Oct 17?''} often has a unique answer. This
suggests that an argument pair together with a timestamp can
effectively identify an event (\ie\ \mtt{(Armstrong,
  Livestrong, Oct 17)} for the above question). Based on this
observation, this paper introduces a novel mechanism to cluster
relations, summarized in Figure~\ref{systemoverview}.

%It has been noticed that entity pairs carry strong information to suggest relations and events~\cite{hasegawa2004discovering,sekine2005automatic}.


\sys\ first applies the ReVerb open information extraction (IE)
system~\cite{fader11} to the news streams to obtain a set of
$(a_1,r,a_2,t)$ tuples, where the $a_i$s are the arguments, $r$ is a
relation phrase and $t$ is the date-stamp of the corresponding news
article. When $(a_1,a_2,t)$ suggests a real-world event, the relation
$r$ of $(a_1,r,a_2,t)$ is likely to describe that event
(\eg\ \mtt{(Armstrong, resign from, Livestrong, Oct 17)}). We call
every $(a_1, a_2, t)$ an {\em \Eec} (\eec), and every relation describing the event an {\em event-mention}.

For each \eec\ $(a_1,a_2,t)$, suppose there are $m$ extraction tuples $(a_1,r_1,a_2,t)\ldots (a_1,r_m,a_2,t)$ sharing that $(a_1,a_2,t)$. We refer to this set of extraction tuples as the {\em \bag}, and denote it by $(a_1,a_2,t,\{r_1\ldots r_m\})$. Ideally, the event-mentions in the \bag\ are semantically equivalent and compose a good relation cluster.
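As a concrete illustration, extraction tuples can be grouped into \bag s by their shared $(a_1,a_2,t)$ key. The following is a minimal sketch; the function and variable names are ours, not part of \sys:

```python
from collections import defaultdict

def build_bags(tuples):
    """Group open-IE extraction tuples (a1, r, a2, t) into bags keyed by
    the event-entity candidate (a1, a2, t); each bag collects every
    relation phrase r extracted for that candidate on that day."""
    bags = defaultdict(list)
    for a1, r, a2, t in tuples:
        bags[(a1, a2, t)].append(r)
    return dict(bags)

extractions = [
    ("Armstrong", "step down from", "Livestrong", "Oct 17"),
    ("Armstrong", "resign from", "Livestrong", "Oct 17"),
    ("Armstrong", "cut ties with", "Livestrong", "Oct 17"),
]
bags = build_bags(extractions)
# bags[("Armstrong", "Livestrong", "Oct 17")] is the bag
# ["step down from", "resign from", "cut ties with"]
```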

The relation clustering problem thus becomes a prediction problem:
does each relation $r_i$ in the \bag\ describe the event? Our solution first exploits the temporal dimension of the news streams through a series of temporal correspondence heuristics (Section~\ref{section_temporal}). A
joint inference model is then designed around these heuristics (see
Section~\ref{section_model}).

\begin{figure}
\centering
\includegraphics[width=3.1in]{systemoverview.pdf}
\caption{\sys\ first applies open IE to articles in the news streams,
  obtaining shallow extractions with time stamps. Next, an
  \Eec\ (\eec) is obtained after grouping daily extractions by
  argument pairs. Temporal features and constraints are developed
  based on our temporal correspondence heuristics and encoded into a
  joint inference model. The model finally creates the relation
  clusters by predicting the relation phrases that describe the \eec.
%System overview: exploiting the temporal hypotheses to cluster relations in the news streams
}
\label{systemoverview}
\end{figure}


%In sum, we frame our relation clustering problem as finding these semantically equivalent relations in the \bag, and then generate the relation cluster.

\section{Temporal Correspondence Heuristics}
\label{section_temporal}
\theoremstyle{plain} \newtheorem{hypothesis}{H}



\newtheorem*{H1}{Temporal Functionality Heuristic}
\newtheorem*{H2}{Temporal Burstiness Heuristic}
\newtheorem*{H3}{One Event-Mention Per Discourse Heuristic}



In this section, we propose a series of temporal heuristics that are
useful for clustering relations at high precision. Our heuristics start
from the basic observation mentioned previously --- events can often
be uniquely determined by their arguments and time.  Additionally, we
find that it is not just the {\em publication time} of the news story
that matters; the {\em verb tenses} of the sentences are also
important. For example, the two sentences \mtt{``Armstrong was the
  chairman of Livestrong''} and \mtt{``Armstrong steps down from
  Livestrong''} are in the past and present tense respectively, which
suggests that their relation phrases are unlikely to describe the
same event and hence are not semantically equivalent. To capture these
intuitions, we propose the {\em Temporal Functionality Heuristic}:

\begin{H1}
	\label{hypo_one}
  News articles published at the same time, mentioning the same entities, and using the same tense tend to describe the same events.
\end{H1}


Unfortunately, we find that not all the event candidates %
$(a_1,a_2,t)$ are equally good for clustering. For example, today's
news might include both \mtt{``Barack Obama heads to the White
  House''} and \mtt{``Barack Obama greets reporters at the White
  House''}. Although the two sentences are highly similar in text
(sharing all words except the relation phrases) and were
published at the same time, they do not describe the same event.

From a probabilistic point of view, we can treat each sentence as
being generated by a particular hidden event; all we can observe is a
mixture of sentences generated from different events. The distribution
over actors is long-tailed: some entity pairs participate in many
events, while most participate in only a few.

It is clear that many hidden
events could generate the entity pair \mtt{(Barack Obama, the White
  House)}, while fewer hidden events could generate \mtt{(Armstrong,
  Livestrong)}. The latter is more appropriate for clustering because
the relations tied to it are more likely to describe the same event. The
question becomes how one can distinguish these appropriate entity
pairs.

Our observation is that when many hidden events can generate an entity pair, some of them are likely to be insensitive to time. For example, \mtt{``Barack Obama heads to the White House''} could appear in any day's news. We can therefore judge whether an entity pair is appropriate for clustering by examining how frequently it has been mentioned in the news streams over time, \ie\ the {\em time series} of that entity pair. The time series of the entity pair \mtt{(Barack Obama, the White House)} tends to fluctuate randomly, while the time series of the entity pair \mtt{(Armstrong, Livestrong)} tends to stay flat for a long time and then spike suddenly on one day. This observation leads to:

\begin{H2}
	\label{hypo_two}
If an entity or an entity pair appears significantly more frequently in one day's news than it has historically, the corresponding event candidates are likely to be good for relation clustering.
\end{H2}

The temporal burstiness heuristic implies that an appropriate \eec\ $(a_1,a_2,t)$ tends to have a {\em spike} on day $t$ in the time series of its arguments $a_i$ or of its argument pair $(a_1,a_2)$.
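To sketch how the heuristic could be operationalized, a spike can be detected by comparing a day's mention count against the historical mean and deviation. The z-score test below is our own simplified illustration; \sys's actual spike detection may differ:

```python
import statistics

def is_spike(counts, day, min_history=7, z_threshold=3.0):
    """Judge whether the mention count on `day` is a spike relative to
    the entity pair's history, using a simple z-score test. `counts` is
    the time series: a list of daily mention counts indexed by day."""
    history = counts[:day]
    if len(history) < min_history:
        return False  # not enough history to judge burstiness
    mean = statistics.mean(history)
    sd = statistics.pstdev(history) or 1.0  # guard a perfectly flat history
    return (counts[day] - mean) / sd > z_threshold

# A flat series that suddenly spikes, as with (Armstrong, Livestrong):
flat_then_spike = [0, 1, 0, 0, 1, 0, 0, 42]
# A fluctuating series, as with (Barack Obama, the White House),
# shows no comparable spike on its final day.
noisy = [5, 7, 6, 8, 5, 7, 6, 7]
```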

Given appropriate \eec s for clustering, a sequence of related but not synonymous relations is likely to be tied to each \eec. Consider an article containing the sentences: \mtt{``Armstrong is the founder of Livestrong. Armstrong steps down from Livestrong.''} These related but not synonymous relations could cause errors. Inspired by the idea of one sense per discourse from~\cite{gale1992one}, we propose:

\begin{H3}
	\label{hypo_three}
A news article tends not to state the same fact more than once.
\end{H3}

The one event-mention per discourse heuristic trades recall for high precision. It implies that each news article contains at most one relation phrase (possibly zero) describing a particular event. Algorithms based on this heuristic tend to choose the most likely relation phrase among those from an article, and hence improve precision. Moreover, the heuristic allows us to collect pairs of relations that are cause-effect candidates, such as \mtt{(shoot, kill)}, which often appear in succession in news articles. Cause-effect candidates are less likely to be synonymous, and can be used to create features in the learning models.


\section{Exploiting the Temporal Heuristics}
\label{section_model}
In this section we propose several models to capture the temporal heuristics, and discuss their pros and cons.

\subsection{Baseline Model}
\label{section_basic_model}

The baseline model to use the \bag\ is to simply predict that all $r_i$ in the \bag\ are event-mentions, and hence are semantically equivalent. That is, given \bag\ $(a_1,a_2,t,\{r_1\ldots r_m\})$, the output relation cluster is $\{r_1\ldots r_m\}$.
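The baseline prediction can be stated in a few lines of code (a sketch with our own illustrative names):

```python
def baseline_cluster(bag):
    """Baseline model: predict that every relation phrase in the bag is
    an event-mention, so the whole bag becomes one relation cluster."""
    a1, a2, t, relations = bag
    return set(relations)

bag = ("Armstrong", "Livestrong", "Oct 17",
       ["step down from", "resign from", "cut ties with"])
cluster = baseline_cluster(bag)
# cluster == {"step down from", "resign from", "cut ties with"}
```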

The baseline model captures most of the temporal functionality heuristic, except for the tense requirement. Our empirical study shows that it performs surprisingly well, demonstrating that the quality of the input to the learning model is good: the \bag s are a promising resource for relation clustering.

The baseline model cannot exploit the other temporal heuristics, however. We propose more advanced models in the following sections.

\subsection{Pairwise Model}
\label{section_news_spike}
The temporal functionality heuristic suggests exploiting the tenses of the relations, while the temporal burstiness heuristic suggests exploiting the time series of the arguments. A pairwise model can be designed to capture both: we compare pairs of relations in the \bag\ and predict whether each pair is synonymous or not.
Output clusters can then be generated according to heuristic rules (\eg\ assuming transitivity among synonyms). Tenses of the relations and time series of the arguments are encoded as features, which we call {\em tense features} and {\em spike features} respectively. An example tense feature is whether one relation is in the past tense while the other is in the present tense; an example spike feature is the covariance of the time series.
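For illustration, the two example features could be computed as follows. This is a sketch under our own simplified representation of tenses and time series; \sys's actual feature set may differ:

```python
import statistics

def tense_mismatch(tense_a, tense_b):
    """Example tense feature: fires when one relation is in the past
    tense and the other in the present tense."""
    return {tense_a, tense_b} == {"past", "present"}

def spike_covariance(series_a, series_b):
    """Example spike feature: covariance of two daily mention-count
    time series; co-occurring spikes yield a large positive value."""
    mean_a = statistics.mean(series_a)
    mean_b = statistics.mean(series_b)
    return sum((x - mean_a) * (y - mean_b)
               for x, y in zip(series_a, series_b)) / len(series_a)
```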

It is noteworthy that the setting of the pairwise model is similar to that of paraphrasing techniques~\cite{dolan2005automatically,SocherEtAl2011:PoolRAE}, which examine two sentences and determine whether they are semantically equivalent. In Section~\ref{empirical_study}, we evaluate the effect of applying paraphrasing techniques to relation clustering.

\subsection{Joint Cluster Model}
\label{section_joint_cluster_model}
The pairwise model has several drawbacks: \textit{1)} it lacks the ability to handle constraints, such as the mutual exclusion constraint implied by the one event-mention per discourse heuristic; \textit{2)} ad-hoc rules, instead of formal optimization, are required to generate clusters containing more than two relations.

A common approach to overcoming these drawbacks and combining the heuristics is to introduce a joint cluster model, with the heuristics encoded as features and constraints. Data, instead of ad-hoc rules, indicates the relevance of the different insights, which can be learned as parameters. The advantage of the joint model is analogous to that of cluster-based approaches to coreference resolution (CR): research on CR shows that a joint model can better capture constraints on multiple variables and yield higher-quality results than pairwise models~\cite{rahman2009supervised}.


%Hypothesis~\ref{hypo_three} implies a global constraint. Including the global constraint into the model is not that straightforward. A common approach is to introduce a joint model with hypotheses encoded as features and constraints. Data, instead of ad-hoc rules, indicates the relevance of different insights, which could be learned jointly as parameters.

We propose an undirected graphical model, \sys, which is able to cluster relations jointly. Constraints can be captured by means of the factors connecting multiple random variables. We will introduce random variables, the factors, the objective function, the inference algorithm and the learning algorithm in the following sections. Figure~\ref{graphmodel} shows an example model for \eec\ \mtt{(Armstrong, Livestrong, Oct 17)}.

\begin{figure}
\centering
\includegraphics[width=3.1in]{graphmodel.pdf}
\caption{An example model for the \eec\ \mtt{(Armstrong, Livestrong, Oct 17)}. $Y$ and $Z$ are binary random variables. $\Phi^Y$, $\Phi^Z$ and $\Phi^{\text{joint}}$ are factors. \mtt{be founder of} and \mtt{step down} come from article 1, while \mtt{give speech at}, \mtt{be chairman of} and \mtt{resign from} come from article 2.}
\label{graphmodel}
\end{figure}

\subsubsection{Random Variables}
For \bag\ $(a_1,a_2,t,\{r_1,\ldots r_m\})$, we introduce one event variable and $m$ relation variables, all boolean-valued. The event variable $Z^{(a_1,a_2,t)}$ indicates whether $(a_1,a_2,t)$ is an appropriate event for clustering. It is designed in accordance with the temporal burstiness heuristic: for an \eec\ like \mtt{(Barack Obama, the White House, Oct 17)}, $Z$ should be assigned 0.

The relation variable $Y^r$ indicates whether relation $r$ describes the \eec\ $(a_1,a_2,t)$ or not. All event-describing relations with $Y^r=1$ could compose a cluster. For example, the assignments $Y^{\textit{step\ down}}=Y^{\textit{resign}}=1$ produce a relation cluster \mtt{\{step down, resign\}}.

\subsubsection{Factors and the Joint Distribution}
In this section, we introduce a conditional probability model defining a joint distribution over all of the event and relation variables. The joint distribution is a function over the {\it factors}. Our model contains the {\em event factors}, the {\em relation factors} and the {\em joint factors}.

The event factor $\Phi^Z$ is a log-linear function with spike features, used to distinguish appropriate events. A relation factor $\Phi^Y$ is also a log-linear function. It can be defined over a single relation variable (\eg\ $\Phi^Y_1$ in Figure~\ref{graphmodel}) with features such as whether a relation phrase comes from a clausal complement: relation phrases in clausal complements are less useful for clustering because they often do not describe a fact. For example, in the sentence \mtt{He heard Romney had won the election}, the extraction (Romney, had won, the election) is not a fact at all. A relation factor can also be defined over two relation variables (\eg\ $\Phi^Y_2$ in Figure~\ref{graphmodel}) with features capturing pairwise evidence for clustering, such as whether two relation phrases have the same tense.

The joint factors $\Phi^{\text{joint}}$ are defined to apply global constraints. They play two roles in our model: \textit{1)} to satisfy the temporal burstiness heuristic, when the event variable is predicted false, the \eec\ is not appropriate for clustering, so all relation variables must take a false value; and \textit{2)} to satisfy the one event-mention per discourse heuristic, at most one relation variable from the same article may take a true value.

We define the joint distribution for our model with the above random variables and factors. Let $\mathbf{Y}=\{Y^{r_i}\}\mid_{i=1}^m$ be the vector of relation variables and let $\mathbf{x}$ be the features. The joint distribution is:
\begin{eqnarray*}
\lefteqn{p(Z = z, \mathbf{Y} = \mathbf{y}|\mathbf{x}; \Theta) \stackrel{\text{\tiny def}}{=} \frac{1}{Z_{x}}\Phi^{Z}(z,\mathbf{x}) }\\
    && \prod_d \Phi^{\text{joint}}(z,\mathbf{y}_d,\mathbf{x})
        \prod_{i,j} \Phi^Y (y_i,y_j,\mathbf{x})
\end{eqnarray*}
\noindent
where $\mathbf{y}_d$ denotes the subset of relation variables from a particular article $d$, and the parameter vector $\Theta$ is the weight vector of the features in $\Phi^Z$ and $\Phi^Y$, which are log-linear functions, \eg
\[
\Phi^{Y}(y_i,y_j,\mathbf{x}) \stackrel{\text{\tiny def}}{=}
\exp \left( \sum_k \theta_{k} \phi_k(y_i,y_j,\mathbf{x}) \right)
\]
where $\phi_k$ is the $k$th feature function.

The joint factors $\Phi^{\text{joint}}$ are used to apply the temporal burstiness heuristic and the one event-mention per discourse heuristic.
$\Phi^{\text{joint}}$ is zero when the \eec\ is not good for clustering but some $y^r=1$, or when more than one $y^r=1$ within one article.
Formally, it is calculated as:
\[
\Phi^{\text{joint}}(z,\mathbf{y}_d,\mathbf{x})
\stackrel{\text{\tiny def}}{=}
\begin{cases}
0  & {\rm if} ~ z=0 \land \exists r : y^r = 1  \\
0  & {\rm if} ~ \sum_{y^r\in \mathbf{y}_d} y^r > 1  \\
1  & {\rm otherwise} \\
\end{cases}
\]
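The case analysis above translates directly into code. The following is a minimal sketch with our own variable names; $z$ is the event-variable assignment and $y_d$ the list of relation-variable assignments for one article $d$:

```python
def joint_factor(z, y_d):
    """Hard joint factor over the event variable z and the relation
    variables y_d of one article d, mirroring the case analysis."""
    if z == 0 and any(y == 1 for y in y_d):
        return 0  # inappropriate EEC: no relation may describe it
    if sum(y_d) > 1:
        return 0  # one event-mention per discourse: at most one true y
    return 1
```

Because the factor is 0/1-valued, any assignment violating either constraint receives zero probability under the joint distribution.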








