\section{Introduction}

Paraphrasing, the task of finding sets of semantically equivalent
surface forms, is crucial to many natural language processing
applications, including relation extraction~\cite{bhagat2008large},
question answering~\cite{faderparaphrase},
summarization~\cite{barzilay1999information} and machine
translation~\cite{callison2006improved}. While the benefits of
paraphrasing have been demonstrated, creating a large-scale corpus of
high-precision paraphrases remains a challenge --- especially for
relation phrases.

Many researchers have considered generating paraphrases by mining the
Web guided by the {\em distributional hypothesis}, which states that
words occurring in similar contexts tend to have similar
meanings~\cite{harris1954distributional}. For example,
DIRT~\cite{lin2001discovery} and Resolver~\cite{yates2009unsupervised}
identify synonymous relation phrases by the distributions of their
arguments. However, the distributional hypothesis has several
drawbacks. First, it can confuse antonyms with synonyms because
antonymous phrases appear in similar contexts as often as synonymous
phrases. For the same reason, it often confuses causes with
effects. For example, DIRT reports that the closest phrase to
\mtt{fall} is \mtt{rise}, and the closest phrase to \mtt{shoot} is
\mtt{kill}.\footnote{\url{http://demo.patrickpantel.com/demos/lexsem/paraphrase.htm}}
Second, the distributional hypothesis relies on statistics over large
corpora to produce accurate similarity estimates. It remains unclear
how to accurately cluster less frequent relations with the
distributional hypothesis.
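Both weaknesses can be seen in a toy sketch of argument-distribution similarity. The sketch below is a deliberately simplified illustration (cosine over raw argument-pair counts, with invented data), not the actual mutual-information metric used by DIRT: antonymous phrases such as \mtt{rise} and \mtt{fall} share argument contexts and therefore score as near-synonyms.

```python
from collections import Counter
from math import sqrt

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c1[k] * c2[k] for k in c1 if k in c2)
    norm = sqrt(sum(v * v for v in c1.values())) * \
           sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

# Invented argument-pair contexts; note that the antonyms
# "rise" and "fall" occur with the same argument pairs.
contexts = {
    "rise": Counter({("stocks", "Monday"): 5, ("oil price", "Q2"): 3}),
    "fall": Counter({("stocks", "Monday"): 4, ("oil price", "Q2"): 2}),
    "eat":  Counter({("cat", "fish"): 6}),
}
print(cosine(contexts["rise"], contexts["fall"]))  # high: antonyms look synonymous
print(cosine(contexts["rise"], contexts["eat"]))   # no shared contexts
```

The same sketch also exposes the sparsity problem: for a rare phrase with only a handful of observed argument pairs, the count vector is too sparse for the similarity estimate to be reliable.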

%Resolver only targets relations appearing at least 25 times.


Another common approach employs parallel corpora. News articles are a
particularly attractive target, because articles from different
sources often describe the same daily events. This property permits a
temporal assumption: phrases appearing in articles published at the
same time tend to have similar meanings. For example, the approaches
of
Dolan~\etal~\shortcite{dolan2004unsupervised} and
Barzilay~\etal~\shortcite{barzilay2003learning} identify pairs of
sentential paraphrases in similar articles that have appeared in the
same period of time. While these approaches use temporal information
as a coarse filter in the data generation stage, they still rely
largely on textual similarity metrics in the prediction stage. This not only
reduces precision, but also limits the discovery of paraphrases with
dissimilar surface strings.
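The intuition behind the temporal assumption can be sketched as follows: grouping time-stamped extractions by argument pair and publication date surfaces relation phrases that report the same event, even when their surface strings share nothing. The extraction tuples below are invented for illustration and do not come from our corpus.

```python
from collections import defaultdict

# Hypothetical time-stamped Open IE extractions:
# (arg1, relation phrase, arg2, publication date)
extractions = [
    ("UN", "condemns",   "the attack", "2013-02-12"),
    ("UN", "denounces",  "the attack", "2013-02-12"),
    ("UN", "criticizes", "the attack", "2013-02-12"),
    ("UN", "meets",      "delegates",  "2013-03-01"),
]

# Group relation phrases by (argument pair, day): phrases reporting
# the same event on the same day become paraphrase candidates.
candidates = defaultdict(set)
for a1, rel, a2, day in extractions:
    candidates[(a1, a2, day)].add(rel)

for key, phrases in candidates.items():
    if len(phrases) > 1:
        print(key, sorted(phrases))
```

Note that \mtt{condemns} and \mtt{denounces} are grouped despite dissimilar surface strings; a purely textual metric would likely miss this pair.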

The goal of our research is to develop a technique for clustering
large numbers of relation phrases at high precision, using only
minimal human effort. The key to our approach is a joint clustering
model that exploits the temporal dimension of news streams, allowing
us to identify semantically equivalent relation phrases with greater
precision. In summary, this paper makes the following contributions:

\bi
\item We formulate a set of three {\em temporal correspondence heuristics}
that characterize regularities over parallel news streams.

\item We develop a novel program, \sys, based on a probabilistic
  graphical model that jointly encodes these heuristics. We present
  inference and learning algorithms for our model.
\item We present a series of detailed experiments demonstrating that
  \sys\ outperforms several competitive baselines, and show through
  ablation tests how each of the temporal heuristics affects
  performance.
\item To spur further research on this topic, we release both our
  generated paraphrase clusters and a corpus of 0.5M timestamped news
  articles, collected over a period of 50 days from hundreds of news
  sources.
    %To our knowledge, there exists no such dataset for academic use.
\ei






