\section{Empirical Study}
\label{empirical_study}
We first introduce the experimental setup for our empirical study; we then address two questions in Sections~\ref{section_eval_parallel} and~\ref{section_methods_of_distributional}, respectively. First, does the \sys\ algorithm effectively exploit the proposed heuristics and outperform other approaches that also use news streams? Second, do the proposed temporal heuristics cluster relations with greater precision than the distributional hypothesis?


\subsection{Experimental Setup}
\label{section_data_generation}

Since we were unable to find a suitable time-stamped, parallel news
corpus, we collected data using the following procedure:

\bi

\item Collect RSS news seeds, which contain the title, time-stamp, and
abstract of the news items.

\item Use these titles to query the Bing news search engine API and collect
additional time-stamped news articles.

\item Strip HTML tags from the news articles using
Boilerpipe~\cite{kohlschutter2010boilerplate}; keep only the title and
first paragraph of each article.

\item Extract shallow relation tuples using the OpenIE
system~\cite{fader11}.  \ei

We performed these steps every day from January 1 to February 22, 2013.  In
total, we collected 546,713 news articles, from which we obtained 2.6
million extractions covering 529 thousand unique relation phrases. These
yielded 79,427 \bag s.

We used several types of features for clustering:
\textit{1)} spike features obtained from time series;
\textit{2)} tense features, such as whether two relation phrases are both in the present tense;
\textit{3)} cause-effect features, such as whether two relation phrases often appear successively in the news articles;
\textit{4)} text features, such as whether sentences are similar;
\textit{5)} syntactic features, such as whether a relation phrase appears in a clausal complement; and
\textit{6)} semantic features, such as whether a relation phrase contains negative words.
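As an illustration of the feature set, the purely lexical features can be computed directly from a phrase pair; the sketch below is hypothetical (the spike, tense, cause-effect, syntactic, and semantic features additionally require the time series and parsed articles, which are omitted here):

```python
def pair_features(r1, r2):
    """Two illustrative text features for a relation-phrase pair.

    Only lexical features are sketched; the remaining feature types
    need the time series and parsed sentences and are omitted.
    """
    t1, t2 = set(r1.split()), set(r2.split())
    return {
        # token-level Jaccard similarity of the two phrases
        "jaccard": len(t1 & t2) / len(t1 | t2),
        # whether both phrases share a head verb (first token)
        "same_head_verb": r1.split()[0] == r2.split()[0],
    }
```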

\subsection{Comparison with Methods using Parallel News Corpora}
\label{section_eval_parallel}
We evaluated \sys\ against other methods that also use time-stamped news.
These include the models mentioned in Section~\ref{section_temporal} and state-of-the-art paraphrasing techniques.


Human annotators created gold relation clusters for 500 \bag s; note that
some \bag s yield no gold cluster, since a cluster requires at least two
synonymous phrases. Precision and recall were computed by comparing an
algorithm's output clusters to the gold cluster of each \eec. We consider
paraphrases with minor lexical diversity, \eg\ \mtt{(go to, go into)}, to
be of lesser interest. Since counting these trivial paraphrases tends to
exaggerate the performance of a system, we also report precision and
recall on {\em diverse clusters}, \ie\ those whose relation phrases all
have different head verbs. Figure~\ref{f:pr_example} illustrates these
metrics with an example; note that under the diverse metrics, all phrases
matching \mtt{go\ *} count as one when computing both precision and
recall.  When a system requires training, we obtain precision and recall
via 5-fold cross-validation on our labeled dataset.
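These metrics can be sketched as follows, approximating the head verb of a phrase by its first token (an illustrative simplification, not the paper's implementation):

```python
def precision_recall(output, gold, diverse=False):
    """Precision/recall of an output cluster against a gold cluster.

    In the diverse condition, all phrases sharing a head verb
    (approximated here as the first token) collapse into one item.
    """
    if diverse:
        output = {r.split()[0] for r in output}
        gold = {r.split()[0] for r in gold}
    else:
        output, gold = set(output), set(gold)
    correct = len(output & gold)
    return correct / len(output), correct / len(gold)

# The example from the figure below:
out  = ["go into", "go to", "speak", "return", "head to"]
gold = ["go into", "go to", "approach", "head to"]
precision_recall(out, gold)                # -> (3/5, 3/4)
precision_recall(out, gold, diverse=True)  # -> (2/4, 2/3)
```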

{\begin{figure}[t]
\def\textexample#1{\it\scriptsize #1}
\renewcommand\arraystretch{1.3}
\centering
\small
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|
>{\centering\arraybackslash}m{0.34\textwidth}|}\hline
%\begin{tabular}{c|c|c}\hline
\small
 $\textbf{output}$ &  \{\bmtt{go\ into}, \bmtt{go\ to},  \mtt{speak},  \mtt{return}, \bmtt{head\ to}\}\\\hline
 $\textbf{gold}$ &\{\bmtt{go\ into}, \bmtt{go\ to}, \mtt{approach}, \bmtt{head\ to}\}\\\hline
  $\textbf{gold}_{\textrm{div}}$ &\{\bmtt{go\ *}, \mtt{approach}, \bmtt{head\ to}\}\\\hline
  $\textbf{P/R}$ &
$\textrm{precision}=3/5$\
$\textrm{recall}=3/4$\\\hline
 $\textbf{P/R}_{\textrm{div}}$ &
$\textrm{precision}_{\textrm{div}}=2/4$\
$\textrm{recall}_{\textrm{div}}=2/3$\\\hline
\end{tabular}
\caption{\label{f:pr_example} An example output cluster and gold cluster, with the corresponding precision/recall numbers.}
\end{figure}
}

We compare \sys\ to the following approaches:

{\bf Baseline:} the model discussed in
Section~\ref{section_basic_model}. This system does not need any training,
and generates outputs with perfect recall.

{\bf Pairwise:} the pairwise model discussed in
Section~\ref{section_news_spike}, using the same set of features as
\sys. To generate output clusters, transitivity is assumed inside each
\bag: for example, when the pairwise model predicts that $(r_1,r_2)$ and
$(r_1,r_3)$ are both paraphrases, the resulting cluster is
$\{r_1,r_2,r_3\}$.
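This transitive closure can be sketched with a simple union-find pass (illustrative code, not the actual implementation):

```python
def close_transitively(relations, predicted_pairs):
    """Group relations into clusters by taking the transitive closure
    of pairwise paraphrase predictions (simple union-find)."""
    parent = {r: r for r in relations}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path halving
            r = parent[r]
        return r

    for a, b in predicted_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for r in relations:
        clusters.setdefault(find(r), set()).add(r)
    return list(clusters.values())

# (r1,r2) and (r1,r3) predicted as paraphrases => {r1,r2,r3}, {r4}
close_transitively(["r1", "r2", "r3", "r4"], [("r1", "r2"), ("r1", "r3")])
```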


{\bf Paraphrase:} Socher~\etal~\shortcite{SocherEtAl2011:PoolRAE} achieved
the best results on the Dolan~\etal~\shortcite{dolan2004unsupervised}
dataset, and released their code and models. We used their off-the-shelf
predictor to replace the classifier in our Pairwise model. Given sentential
paraphrases, aligning relation phrases is natural, because OpenIE has
already identified the relation phrases.


\begin{table}[bt]
\begin{center}
\footnotesize
\begin{tabular}{|c|c|c|c|c|}
 \hline
 \multirow{2}{*}{ System} & \multicolumn{2}{c|}{ P/R }  & \multicolumn{2}{c|}{ $\textrm{P/R}_{\textrm{div}}$} \\
\cline{ 2-5}
         & ~prec~ & ~rec~  & ~prec~ & ~rec~ \\\hline
Baseline & 0.67 & 1.00 & 0.53 & 1.00\\
Pairwise & 0.90 & 0.60 & 0.84 & 0.35\\
Paraphrase & 0.81 & 0.37 & 0.68 & 0.31 \\\hline
%w/oSpike & 90.2 & 62.5 & 83.7 & 28.2\\\hline
\sys\ & \bf{0.92} & 0.60 & \bf{0.90} & 0.38\\
\hline
\end{tabular}
\end{center}
\caption{Comparison with methods using parallel news corpora} \label{t:compare_to_parallel}
\end{table}

Table~\ref{t:compare_to_parallel} shows precision and recall numbers.  It
is interesting that the basic model already obtains $0.67$ precision
overall and $0.53$ in the diverse condition. This demonstrates that the
\bag s generated from the news streams are a promising resource for
clustering.  Paraphrase performs better than Baseline, but not as well as
Pairwise or \sys, especially in the diverse cases, probably because
Paraphrase relies purely on text metrics and does not consider any
temporal attributes. By exploiting the features used by \sys, Pairwise
significantly improves precision, which demonstrates the power of our
temporal correspondence heuristics.  Our joint clustering model, \sys,
which considers both temporal features and constraints, achieves the best
performance in both conditions.

We conducted an ablation study to evaluate how spike features and tense
features, which are particularly relevant to the temporal aspects of news
streams, improve performance. Figure~\ref{ablation} compares the
precision/recall curves of three systems in the diverse condition: (1)
\sys; (2) w/oSpike, with all spike features turned off; and (3) w/oTense,
with all tense features turned off.  There are some dips in the curves
because the predictions are sorted by the value of the corresponding ILP
objective function, which does not perfectly reflect prediction
accuracy. However, it is clear that \sys\ achieves greater precision
across the full range of recall.
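The bookkeeping behind such curves can be sketched as follows: predictions are ranked by score (here, the ILP objective value) and precision/recall are recorded at every cutoff. The scores and labels below are invented for illustration:

```python
def pr_curve(predictions):
    """Precision/recall points from (score, is_correct) predictions,
    ranked by decreasing score.  A dip in precision appears whenever a
    wrong prediction outranks a correct one."""
    total_correct = sum(ok for _, ok in predictions)
    points, correct = [], 0
    for i, (_, ok) in enumerate(sorted(predictions, reverse=True), 1):
        correct += ok
        points.append((correct / i, correct / total_correct))
    return points

# A wrong prediction ranked second causes a precision dip at recall 0.5:
pr_curve([(0.9, True), (0.8, False), (0.7, True)])
```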


\begin{figure}
\centering
\includegraphics[width=3.1in]{ablation.pdf}
\caption{Precision/recall curves on diverse cases for \sys, w/oSpike, and w/oTense}
\label{ablation}
\end{figure}




\subsection{Comparison with Methods using the Distributional Hypothesis}
\label{section_methods_of_distributional} We evaluated our model against
methods based on the distributional hypothesis. We ran \sys\ over all \bag
s except for the development set and compared to the following systems:

{\bf Resolver: } Resolver~\cite{yates2009unsupervised} takes a set of
extraction tuples of the form $(a_1,r,a_2)$ as input and produces a set
of relation clusters as output\footnote{Resolver also produces argument
clusters, but this paper only evaluates relation
clustering.}.\comment{Resolver applies a hybrid similarity metric by
combining Extracted Shared Property similarity (each relation is
represented by a vector of argument pairs) and string similarity (each
relation is represented by a vector of its own words).} We evaluated
Resolver's performance on the 2.6 million extractions described in
Section~\ref{section_data_generation}, using Resolver's default
parameters.


{\bf ResolverNYT: } Since Resolver should perform better when given more
accurate statistics from a larger corpus, we also tried giving it more
data. Specifically, we ran ReVerb on 1.8 million NY Times articles
published between 1987 and 2007~\cite{sandhaus08} to obtain 60 million
extractions.  We ran Resolver on the union of this set and our standard
test set, but report performance only on clusters whose relations were
seen in our news stream.

{\bf ResolverNytTop: } Resolver is designed to achieve good performance on
its top results. We thus ranked the ResolverNYT outputs by their scores and
report the precision of the top 100 clusters.

{\bf Cosine: } Cosine similarity is a basic metric for the distributional
hypothesis. This system employs the same setup as Resolver in order to
generate relation clusters, except that Resolver's similarity metric is
replaced with the cosine. Each relation is represented by a vector of
argument pairs. The similarity threshold to merge two clusters was 0.5.
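The similarity computation of this baseline can be sketched directly; each relation phrase is represented as a bag of observed argument pairs (toy data, illustration only):

```python
from collections import Counter
from math import sqrt

def cosine(rel1_args, rel2_args):
    """Cosine similarity between two relations, each represented as a
    bag of (arg1, arg2) pairs observed with that relation phrase."""
    v1, v2 = Counter(rel1_args), Counter(rel2_args)
    dot = sum(v1[k] * v2[k] for k in v1.keys() & v2.keys())
    norm = sqrt(sum(c * c for c in v1.values())) * \
           sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0
```

Two clusters would then be merged when this value exceeds the 0.5 threshold used in the experiments.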

{\bf CosineNYT: } As with ResolverNYT, we ran Cosine with the extra
60 million extractions and report performance only on relations seen in
our news stream.





We measured the precision of each system by manually labeling all output
clusters when a system generated 100 or fewer (\eg\ ResolverNytTop), and
a random sample of 100 output clusters otherwise. Annotators first
determined the meaning of every output cluster and then created a gold
cluster by choosing the correct relations\footnote{The gold cluster could
be empty if the output cluster was nonsensical.}. Measuring overall
recall would require labeling all relations, which is infeasible; so,
unlike many papers that report recall only on the most frequent
relations, we report the total number of relations returned in the output
clusters. As in Section~\ref{section_eval_parallel}, we also report
numbers for the case of lexically diverse relation phrases.






\begin{table}[bt]
\begin{center}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
 \hline
\multirow{2}{*}{ System} & \multicolumn{2}{c|}{ all }  & \multicolumn{2}{c|}{ diverse } \\
\cline{ 2-5}
         & ~prec~ & \#rels & ~prec~ & \#rels \\\hline
{ Resolver} & 0.78 & 129 & 0.65 & 57 \\
{ ResolverNyt} & 0.64 & 1461 & 0.52 & 841 \\
{ ResolverNytTop} & 0.83 & 207 & 0.72 & 79 \\
{Cosine} & 0.65 & 17 & 0.33 & 9 \\
{CosineNyt} & 0.56 & 73 & 0.46 & 59 \\\hline
\sys\ & \bf{0.93} & 21472 & \bf{0.87} & 5368 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison with methods using the distributional hypothesis} \label{t:compare_to_distributional}
\end{table}

As can be seen in Table~\ref{t:compare_to_distributional}, \sys\
outperformed the methods based on the distributional hypothesis. The
performance of Cosine and CosineNyt was very low, suggesting that simple
similarity metrics are insufficient for the relation clustering problem,
even with large-scale input. Resolver and ResolverNyt employ a more
sophisticated similarity measure and achieve better results. However, it
is surprising that Resolver attains greater precision than
ResolverNyt. A possible reason is that argument pairs drawn from articles
spanning 20 years sometimes provide incorrect evidence for
clustering. For example, there were extractions such as \mtt{(the
Rangers, be third in, the NHL)} and \mtt{(the Rangers, be fourth in, the
NHL)} from news in 2007 and 2003, respectively; from these, ResolverNyt
produced the incorrect cluster \mtt{\{be third in, be fourth
in\}}. \sys\ achieves greater precision than even the best results from
ResolverNytTop, because \sys\ successfully captures the temporal
heuristics and does not confuse synonyms with antonyms, or causes with
effects.  \sys\ also returned an order of magnitude more relations than
the other methods.\comment{This is because given two relations in news streams,
conclusions can only be drawn from distributional similarities when given
large amounts of shared extractions; this is not common even given the
whole NY Times corpus as input. When distributional similarities are
confronted with insufficient statistical evidences, \sys\ can still predict
many relations correctly by exploiting the strength of temporal
information.}

The heuristics and models in this paper are designed for high precision. To enable a fair comparison with the distributional hypothesis, we purposely prevented \sys\ from relying on any distributional similarity. However, \sys's graphical model has the flexibility to incorporate similarity metrics as features; such a hybrid model has great potential to increase recall, which is a goal for future work.


%%Riedel~\etal~\shortcite{riedel-ecml10} as well as our new model (\sys).  We
%%also compare with {\bf SoloR}, a reimplementation of their algorithm, which we
%%built in Factorie~\cite{mccallum2009factorie}, and will use later to evaluate
%%sentential extraction.
%%
%%\sys\ achieves competitive or higher precision over all ranges of recall,
%%with the exception of the very low recall range of approximately 0-1\%.  It also
%%significantly extends the highest recall achieved, from 20\% to 25\%, with
%%little loss in precision.  To investigate the low precision in the 0-1\% recall
%%range, we manually checked the ten highest confidence extractions
%%produced by \sys\ that were marked wrong. We found that all
%%ten were true facts that were simply missing from Freebase.  A manual evaluation, as we perform next
%%for sentential extraction, would remove this dip.
%%
%%\subsection{Sentential Extraction}
%%
%%
%%%\begin{table*}[bt]
%%%\centering
%%%\small{
%%%\begin{tabular}{|l|r|r||l|l||l|l||l|l|}
%%%\hline
%%%\multicolumn{1}{|c|}{Relation} & \multicolumn{2}{c||}{ Overlaps} & \multicolumn{2}{c||}{ Baseline1 } & \multicolumn{2}{c||}{ Baseline2 } & \multicolumn{2}{c|}{\sys } \\
%%%  & \#pairs & \#snts & $\tilde{P}_m$ & $\tilde{R}_m$ & $\tilde{P}_m$ & $\tilde{R}_m$ & $\tilde{P}_m$ & $\tilde{R}_m$ \\
%%%\hline
%%%/business/person/company                      &  1 &   2 & .971& .371& .941 & .360 & .74  & .427 \\
%%%/people/person/place\_lived                   & 63 &  97 & .833& .083& .833 & .083 & .87  & .083 \\
%%%/location/location/contains                   & 96 & 708 & 1   & .255& 1    & .51  & .8   & .59  \\
%%%/business/company/founders                    &  0 &   0 & .889& .174& .667 & .13  & .667 & .13  \\
%%%/people/person/nationality                    &  3 &   4 & 1   & .146& 1    & .22  & .59  & .22  \\
%%%/location/neighborhood/neighborhood\_of       &  0 &   0 & 1   & .111& 1    & .111 & .469 & .111 \\
%%%/people/person/children                       &  0 &   0 & 1   & .083& 1    & .125 & .786 & .125 \\
%%%/people/deceased\_person/place\_of\_death     & 19 &  36 & 1   & .2  & 1    & .267 & .905 & .267 \\
%%%/people/person/place\_of\_birth               & 59 &  82 & 1   & .083& 1    & .083 & .286 & .25  \\
%%%/business/company/advisors                    &  0 &   0 & N/A & 0   & N/A  & 0    & N/A  & 0    \\
%%%/location/country/administrative\_divisions   & 60 & 412 & N/A & 0   & N/A  & 0    & N/A  & 0    \\
%%% other relations with lots of overlaps: /location/country/capital (487), /location/us_state/capital (39) ...
%%%  Overlapping combinations (%pairs, %snts):
%%%1       1       /people/person/place_of_birth,/people/deceased_person/place_of_burial,/people/person/place_lived
%%%42      145     /location/location/contains,/location/country/administrative_divisions
%%%14      221     /location/location/contains,/location/country/capital
%%%2       11      /location/province/capital,/location/location/contains
%%%9       39      /location/location/contains,/location/us_state/capital
%%%3       3       /people/person/place_lived,/people/person/place_of_birth,/people/deceased_person/place_of_death
%%%1       2       /business/person/company,/people/person/nationality
%%%17      266     /location/location/contains,/location/country/capital,/location/country/administrative_divisions
%%%9       23      /location/location/contains,/location/us_county/county_seat
%%%1       1       /people/person/place_lived,/people/person/nationality
%%%24      28      /people/person/place_of_birth,/people/person/place_lived
%%%24      40      /people/person/place_lived,/people/person/place_of_birth
%%%10      24      /people/person/place_lived,/people/deceased_person/place_of_death
%%%1       1       /location/location/contains,/base/locations/countries/states_provinces_within,/location/country/administrative_divisions
%%%2       2       /location/location/contains,/location/br_state/capital
%%%1       1       /people/person/place_of_birth,/people/person/nationality
%%%6       9       /people/person/place_of_birth,/people/deceased_person/place_of_death
%%
%%% Overlapping relations (%pairs, %snts):
%%%96      708     /location/location/contains
%%%59      82      /people/person/place_of_birth
%%%63      97      /people/person/place_lived
%%%9       23      /location/us_county/county_seat
%%%1       1       /people/deceased_person/place_of_burial
%%%3       4       /people/person/nationality
%%%1       1       /base/locations/countries/states_provinces_within
%%%2       11      /location/province/capital
%%%1       2       /business/person/company
%%%31      487     /location/country/capital
%%%2       2       /location/br_state/capital
%%%60      412     /location/country/administrative_divisions
%%%19      36      /people/deceased_person/place_of_death
%%%9       39      /location/us_state/capital
%%
%%
%%
%%%\hline
%%%\end{tabular}
%%%}
%%%\caption{Number of overlapping matches to Freebase, as well as estimated precision and recall for \sys\ and two baselines. }
%%%\label{t:sent2}
%%%\vspace{-8pt}
%%%\end{table*}
%%
%%%Baseline1 overall: pr .977, re .224
%%%Baseline2 overall: pr .981, re .380
%%
%%
%%
%%
%%\begin{figure}[bt]
%%\hspace*{-10pt}
%%\includegraphics[width=3.3in]{Sent.pdf}
%%%\vspace*{1.5in}
%%\vspace{-24pt}
%% \caption{Sentential extraction precision / recall curves for \sys\ and
%%   {\bf SoloR}.}
%%   \vspace{-5pt}
%%   \label{f:sent}
%%\end{figure}
%%
%%Although their model includes variables to model sentential extraction,
%%Riedel~\etal~\shortcite{riedel-ecml10} did not report sentence level
%%performance.  To generate the precision / recall curve we used the joint
%%model assignment score for each of the sentences that contributed to
%%the aggregate extraction decision.
%%
%%
%%Figure~\ref{f:agg} shows approximate precision / recall curves for
%%\sys\ and {\bf SoloR} computed against manually generated sentence labels, as defined in Section~\ref{s:metrics}.   \sys\ achieves significantly higher recall with a consistently high level of precision.    At the highest recall point, \sys\ reaches 72.4\% precision and 51.9\% recall, for
%%an F1 score of 60.5\%.
%%
%%%\begin{itemize}
%%%\item say how we sampled up to 100 for each relation
%%%\item if we did not reweight (then bias towards performance of dominant relation /location/location/contains, then precision 98.2\% at recall 43.4\%.
%%%\item how we extended the curve
%%%\end{itemize}
%%
%%\subsection{Relation-Specific Performance}
%%
%%Since the data contains an unbalanced number of instances of each relation, we
%%also report precision and recall for each of the ten most frequent relations.
%%Let $S^{M}_r$ be the sentences where \sys\ extracted an instance of relation
%%$r \in R$, and let $S^F_r$ be the sentences that match the arguments of a
%%fact about relation $r$ in $\Delta$. For each $r$, we sample 100 sentences
%%from both $S^{M}_r$ and $S^F_r$ and manually check accuracy. To estimate
%%precision $\tilde{P}_r$ we compute the ratio of true relation mentions in
%%$S^{M}_r$, and to estimate recall $\tilde{R}_r$ we take the ratio of true relation mentions
%%in $S^F_r$ which are returned by our system.
%%
%%\begin{table*}[bt]
%%\begin{small}
%%\begin{center}
%%\begin{tabular}{|c|r|c||l|l|}
%%\hline
%%\multirow{2}{*}{Relation} & \multicolumn{2}{c||}{Freebase Matches} & \multicolumn{2}{c|}{ \sys} \\
%%         & ~\#sents~~ & \% true & $\tilde{P}$ & $\tilde{R}$ \\
%%\hline
%%/business/person/company                     & 302  & 89.0  & 100.0  & 25.8 \\
%%/people/person/place\_lived                  & 450  & 60.0   & ~~80.0  & ~~6.7 \\
%%/location/location/contains                  & 2793 & 51.0  & 100.0  & 56.0  \\
%%/business/company/founders                  & 95   & 48.4 & ~~71.4 & 10.9  \\
%%/people/person/nationality                   & 723  & 41.0  & ~~85.7  & 15.0  \\
%%/location/neighborhood/neighborhood\_of      & 68   & 39.7 & 100.0 & 11.1 \\
%%/people/person/children                       & 30   & 80.0   & 100.0 & ~~8.3 \\
%%/people/deceased\_person/place\_of\_death     & 68   & 22.1 & 100.0 & 20.0 \\
%%/people/person/place\_of\_birth              & 162  & 12.0  & 100.0 & 33.0  \\
%%/location/country/administrative\_divisions  & 424  & ~~0.2 & N/A  & ~~0.0    \\
%%\hline
%%\end{tabular}
%%\end{center}
%%\end{small}
%%\caption{Estimated precision and recall by relation, as well as the number of matched sentences (\#sents) and accuracy (\%~true) of matches between sentences and facts in Freebase. }
%%\label{t:sent}
%%\end{table*}
%%
%%Table~\ref{t:sent} presents this approximate precision and recall for \sys\ on each of the relations,
%%along with statistics we computed to measure the quality  of the weak supervision.
%%Precision is high for the majority of relations but recall is consistently lower.
%%%However, the matches to Freebase, too, have low recall (25.7\%\footnote{On a sample of 2000 annotated
%%%sentences, we found that match recall is about 25.7\% and precision around 41.3\%.}).
%%We also see that the Freebase matches are highly skewed in quantity
%%and can be low quality for some relations,
%%with very few of them actually corresponding to true extractions.
%%The approach generally performs best on the relations with a sufficiently large number of true matches,
%%in many cases even achieving precision that outperforms the accuracy of the heuristic matches, at reasonable
%%recall levels.
%%
%%\subsection{Overlapping Relations}
%%
%%Table~\ref{t:sent} also highlights some of the effects of learning with overlapping relations.  For example, in the data, almost all of the matches for the administrative\_divisions relation overlap with the contains relation, because they both model relationships for a pair of locations.  Since, in general, sentences are much more likely to describe a contains relation, this overlap leads to a situation were almost none of the administrate\_division matches are true ones, and we cannot accurately learn an extractor. However, we can still learn to accurately extract the contains relation, despite the distracting matches.   Similarly, the place\_of\_birth and place\_of\_death relations tend to overlap, since it is often the case that people are born and die in the same city.   In both cases, the precision outperforms the labeling accuracy and the recall is relatively high.  %However,  the birth place is much harder to learn to predict overall, given the  low match accuracy.
%%
%%To measure the impact of modeling overlapping relations, we also evaluated a simple, restricted baseline.  Instead of labeling each entity pair with the set of all true Freebase facts, we created a dataset where each true relation was used to create a different training example. Training \sys\ on this data simulates effects of conflicting supervision that can come from not modeling overlaps.   On average across relations, precision increases 12 points but recall drops 26 points, for an overall reduction in F1 score from 60.5\% to 40.3\%.
%%%the recall for this baseline drops approximately 20 points to 22.4\% and the precision drops almost 5 points to 72.9\%.
%%% for baseline 2 F1 score reaches .56.
%%
%%
%%%\begin{itemize}
%%%\item challenge, vast number of NAs: in fact, 96.9\% of entity pairs do
%%%not have any matches to Freebase
%%%\item quality of the matches: on a sample of about 2000 sentences, we found that
%%%across relations matching recall only around 25.7\%, matching precision around 41.3\%
%%%\item evaluating relation-specific quality based on a manually annotated sample of the test set unpractical, due to small number of mentions with relations
%%%\item instead we approximate precision and recall by sampling ....
%%
%%%\item unsuprisingly, the amount and quality of matches varies between relations, and so does the quality of our predictor. Table X gives an overview
%%
%%%\item observation 1: the extractors often reach high precision, sometimes at the cost of low recall. w
%%
%%%\item observation 2: some relation highly correlated with target, but not the same: administrative\_divisions. Here, the matches are particularly poor, since they usually are sentences containing a location contains, but not administrative relationship.
%%
%%%\item we also note skew in relations
%%
%%%\item somewhere: how we labeled (facts that are entailed), e.g. nationality, place\_lived
%%
%%%\end{itemize}
%%
%%
%%
%%%\begin{figure}[bt]
%%%\includegraphics[width=3.1in]{Relabeled.pdf}
%%%%\vspace*{1.5in}
%%%\vspace{-8pt}
%%% \caption{Precision / recall curves for \sys's aggregate extraction on
%%%   relabeled and original data. }
%%%   \label{f:relabel}
%%%   \vspace{-8pt}
%%%\end{figure}
%%
%%%\subsection{Improving Performance}
%%
%%%We also report experiments with modifications to the original
%%%weak-supervision data matching algorithm and the set of features used in
%%%the model.  When matching entities from Freebase to the text, we allow
%%%matches to the list of pseudonyms, instead of only using the primary name.
%%%This change significantly increases the number of matching facts, from 4700
%%%to 5500, approximately, in the training set.
%%
%%%Figure~\ref{f:relabel} shows the aggregate extraction precision / recall
%%%curves for \sys\ with the old and new data sets.  The 17\% increase in
%%%training data creates a large improvement in performance, demonstrating the
%%%advantage of careful matching strategies for weak supervision.  It also
%%%suggests that weak-supervision approaches may achieve significantly
%%%improved performance, as the size and quality of the reference database
%%%grows.
%%
%%
%%\subsection{Running Time}
%%
%%One final advantage of our model is the modest running time.   Our implementation of the Riedel~\etal~\shortcite{riedel-ecml10} approach required approximately 6 hours to train on NY Times 05-06 and 4 hours to test on the NY Times 07, each without preprocessing.   Although they do sampling for inference, the global aggregation variables require reasoning about an exponentially large (in the number of sentences) sample space.
%%
%%In contrast, our approach required approximately one minute to train and less than one second to test, on the same data.    This advantage comes from the decomposition that is possible with the deterministic OR aggregation variables.   For test, we simply consider each sentence in isolation and during training our approximation to the weighted assignment problem is linear in the number of sentences.
%%
%%\subsection{Discussion}
%%The sentential extraction results demonstrates the advantages of learning a model that is primarily driven by sentence-level features.
%%Although previous approaches have used more sophisticated features for aggregating the evidence from individual sentences, we demonstrate that aggregating strong sentence-level evidence with a simple deterministic OR that models overlapping relations is more effective, and also enables  training of a sentence extractor that  runs with no  aggregate information.
%%
%%While the Riedel~\etal\ approach does include a model of which sentences express relations, it makes significant use of aggregate features that are primarily designed to do entity-level relation predictions and has a less detailed model of extractions at the individual sentence level.   Perhaps surprisingly, our model is able to do better at both the sentential and aggregate levels.
%%


