
\section{Related Work}
The vast majority of paraphrasing approaches~\cite{androutsopoulos2009survey,madnani2010generating} fall into two categories: computing similarity based on the distributional hypothesis over large unlabeled corpora, and extracting paraphrases from parallel and comparable corpora.

{\bf Using Distribution Similarity: }
DIRT by Lin and Pantel~\shortcite{lin2001discovery} employs mutual information statistics to compute the distributional similarity between relations represented as dependency paths. Resolver~\cite{yates2009unsupervised} introduces a new similarity metric, the Extracted Shared Property (ESP), and clusters relations with a formal probabilistic model that combines ESP with surface string similarity.
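To make the distributional-similarity idea concrete, the following minimal sketch builds PMI-weighted feature vectors for relation phrases from toy (relation, argument) co-occurrence counts and compares them by cosine similarity, in the spirit of DIRT. The data and function names are illustrative assumptions, not taken from the cited systems.

```python
from collections import Counter
from math import log, sqrt

# Toy (relation phrase, argument word) co-occurrences.
# Hypothetical data for illustration only.
pairs = [
    ("acquire", "Google"), ("acquire", "Microsoft"), ("acquire", "startup"),
    ("buy", "Google"), ("buy", "startup"), ("buy", "house"),
    ("born in", "Paris"), ("born in", "London"),
]

pair_counts = Counter(pairs)
rel_counts = Counter(r for r, _ in pairs)
arg_counts = Counter(a for _, a in pairs)
total = len(pairs)

def pmi(rel, arg):
    """Pointwise mutual information of a relation and an argument word."""
    joint = pair_counts[(rel, arg)] / total
    if joint == 0:
        return 0.0  # unseen pair: no evidence, score 0
    return log(joint / ((rel_counts[rel] / total) * (arg_counts[arg] / total)))

def similarity(r1, r2):
    """Cosine similarity between the PMI feature vectors of two relations."""
    args = sorted(arg_counts)
    v1 = [pmi(r1, a) for a in args]
    v2 = [pmi(r2, a) for a in args]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = sqrt(sum(x * x for x in v1))
    n2 = sqrt(sum(x * x for x in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Under this sketch, {\it acquire} and {\it buy} score high because they share argument fillers, while {\it acquire} and {\it born in} share none; this is exactly where the distributional hypothesis can also conflate antonyms, which share contexts just as synonyms do.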

Identifying the semantic equivalence of relation phrases is also known as {\it relation discovery} or {\it unsupervised semantic parsing}. Common techniques for these tasks do not compute similarity explicitly but rely implicitly on the distributional hypothesis. Poon and Domingos~\shortcite{poon2009unsupervised} propose Unsupervised Semantic Parsing (USP), which clusters relations represented as dependency-tree fragments by repeatedly merging relations with similar contexts. Yao~\etal~\shortcite{yao2011structured,yaounsupervised} introduce generative models for relation discovery, in which relations are represented by features extracted from their contexts. The relation-feature matrix is then fed to LDA-style algorithms in the same way as a traditional document-word matrix. Chen~\etal~\shortcite{chen2011domain} focus on relation discovery in a particular domain and extend a generative model with meta-constraints derived from lexical, syntactic, and discourse regularities.

%Methods of the distributional hypothesis often confuse synonyms with antonyms and causes with effects. It also requires massive statistic evidences to achieve reliable output.

In contrast to the approaches above, this work exploits temporal hypotheses and mines news streams to produce high-precision relation clusters. \sys\ can find relation phrases describing the same event while avoiding errors such as confusing synonyms with antonyms and causes with effects. Moreover, unlike approaches based on the distributional hypothesis, it does not require massive statistical evidence as input.


{\bf Using Comparable and Parallel Corpora: }
Comparable and parallel corpora, including news streams and multiple translations of the same story, have been used to generate paraphrases, both sentential~\cite{barzilay2003learning,dolan2004unsupervised,shinyama2003paraphrase} and phrasal~\cite{barzilay2001extracting,shen2006adding,pang2003syntax}. Common approaches first gather relevant articles and then pair sentences that are likely to be paraphrases.
Given a training set of paraphrases, models can be learned and applied to unlabeled pairs~\cite{dolan2005automatically,SocherEtAl2011:PoolRAE}. Phrasal paraphrases are often obtained by running an alignment algorithm over the paraphrased sentence pairs.
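The sentence-pairing step above can be sketched as follows: a minimal, hypothetical example that pairs sentences across two comparable articles whenever their bag-of-words cosine similarity exceeds a threshold. Real systems use richer features, trained classifiers, and alignment models; the threshold and helper names here are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pair_sentences(article1, article2, threshold=0.5):
    """Pair sentences across two comparable articles whose
    bag-of-words cosine similarity meets the threshold."""
    pairs = []
    for s1 in article1:
        bow1 = Counter(s1.lower().split())
        for s2 in article2:
            bow2 = Counter(s2.lower().split())
            if cosine(bow1, bow2) >= threshold:
                pairs.append((s1, s2))
    return pairs
```

Such lexical-overlap pairing illustrates why these methods tend toward simple lexical variants: high-overlap pairs are favored by construction, while diverse rewordings of the same event fall below the threshold.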

While prior work generally uses the temporal dimension of news streams as a coarse filter, it still largely relies on text metrics, such as context similarity and edit distance, to make predictions and alignments. These text metrics are not sufficient to produce high-precision results; moreover, they tend to produce paraphrases that are simple lexical variants (\eg\ {\it \{go to, go into\}}). In contrast, our approach generates relation clusters with both high precision and high diversity.

{\bf Others:}
Textual entailment~\cite{dagan2009recognizing}, which determines whether one phrase can be inferred from another, is closely related to the relation clustering task. Berant~\etal~\cite{berant2011global} are aware of the flaws in distributional similarity and propose to train local entailment classifiers that can combine many features. Lin~\etal~\cite{lin2012no} also use temporal information to detect the semantics of entities.


%Textual entailment~\cite{dagan2009recognizing}, which finds a phrase inferring another phrase, is also related to this paper. It is an interesting future work to investigate how temporal hypotheses can be used for textual entailment problems.

%The PASCAL Recognising Textual Entailment Challenge~\cite{dagan2009recognizing} proposes the task of recognizing when two sentences entail one another, given manually labeled training data, and many authors have submitted responses to this challenge.





