\section{Experimental Setup}

We follow the approach of Riedel~\etal~\shortcite{riedel-ecml10} for
generating weak supervision data, computing features, and evaluating
aggregate extraction.  

We also introduce new metrics for measuring sentential extraction
performance, both relation-independent and relation-specific.

\subsection{Data Generation}

We used the same data sets as Riedel~\etal~\shortcite{riedel-ecml10} for
weak supervision.  The data was first tagged with the Stanford NER
system~\cite{finkel-acl05}, and entity mentions were then found by
collecting each contiguous phrase whose words were tagged identically (\ie,
as a person, location, or organization).  Finally, these phrases were
matched to the names of Freebase entities.
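The mention-collection step can be sketched as follows (a minimal illustration; the function name and tag inventory are our own, not the actual pipeline):

```python
def collect_mentions(tokens, tags,
                     entity_tags=("PERSON", "LOCATION", "ORGANIZATION")):
    """Group each maximal run of identically tagged tokens into one mention."""
    mentions = []
    i = 0
    while i < len(tokens):
        if tags[i] in entity_tags:
            j = i
            # Extend the span while the NER tag stays the same.
            while j < len(tokens) and tags[j] == tags[i]:
                j += 1
            mentions.append((" ".join(tokens[i:j]), tags[i]))
            i = j
        else:
            i += 1
    return mentions
```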

Given the set of matches, define $\Sigma$ to be the set of NY Times sentences
with two matched phrases, $E$ to be the set of Freebase entities that were
mentioned in one or more sentences, $\Delta$ to be the set of Freebase
facts whose arguments, $e_1$ and $e_2$, were both mentioned in a sentence in
$\Sigma$, and $R$ to be the set of relation names used in the facts of
$\Delta$.  These sets define the weak supervision data.
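In code, constructing these sets might look like the following sketch (the data shapes are our own assumptions for illustration):

```python
def build_weak_supervision(matched_sentences, freebase_facts):
    """matched_sentences: iterable of (sentence_id, e1, e2) triples, one per
    sentence with two matched phrases.  freebase_facts: set of (r, e1, e2).
    Returns the sets (Sigma, E, Delta, R) defined in the text."""
    Sigma = {s for (s, _, _) in matched_sentences}
    pairs = {(e1, e2) for (_, e1, e2) in matched_sentences}
    E = {e for pair in pairs for e in pair}
    # Delta keeps only the facts whose argument pair co-occurs in a sentence.
    Delta = {(r, e1, e2) for (r, e1, e2) in freebase_facts
             if (e1, e2) in pairs}
    R = {r for (r, _, _) in Delta}
    return Sigma, E, Delta, R
```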

\subsection{Features and Initialization}

We use the set of sentence-level features described by
Riedel~\etal~\shortcite{riedel-ecml10}, which were originally developed by
Mintz~\etal~\shortcite{mintz-acl09}.  These include indicators for various
lexical, part of speech, named entity, and dependency tree path properties
of entity mentions in specific sentences, as computed with the Malt
dependency parser~\cite{nivre04} and OpenNLP POS
tagger\footnote{http://opennlp.sourceforge.net/}.  However, unlike the previous work, we did not make
use of any features that explicitly aggregate these properties across
multiple mention instances.

The \sys\ algorithm has a single parameter $T$, the number of training iterations, that must
be specified manually. We used $T=50$ iterations, which performed best in development
experiments.


\subsection{Evaluation Metrics}
\label{s:metrics}


Evaluation is challenging, since only a small percentage (approximately 3\%)
of sentences match facts in Freebase,
and the number of matches is highly unbalanced across relations, as we will see in more detail later. We
use the following metrics.

\paragraph{Aggregate Extraction} Let $\Delta^e$ be the set of extracted
facts for a given system; we compute aggregate precision and recall
by comparing $\Delta^e$ with $\Delta$.  This metric is easily computed but
underestimates extraction accuracy, because Freebase is incomplete and some
true facts in $\Delta^e$ will be marked as wrong.
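Aggregate precision and recall reduce to simple set comparisons; a minimal sketch:

```python
def aggregate_pr(extracted, freebase):
    """extracted, freebase: sets of (relation, e1, e2) facts."""
    tp = len(extracted & freebase)          # facts confirmed by Freebase
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(freebase) if freebase else 0.0
    return precision, recall
```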

\paragraph{Sentential Extraction}  Let $S^e$ be the sentences where some
system extracted a relation and $S^F$ be the sentences that match the
arguments of a fact in $\Delta$.  We compute sentential extraction
accuracy by sampling a set of 1000 sentences from $S^e \cup S^F$ and
manually labeling the correct extraction decision, either a relation $r \in
R$ or $none$.   We then report precision and recall for each system on this
set of sampled sentences.   These results provide a good approximation to
the true precision but can overestimate the actual recall, since we did not
manually check the much larger set of sentences where no approach predicted
extractions.
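The sample-based estimate can be sketched as follows (the data shapes and function name are hypothetical; the gold labels stand in for the manual annotation step):

```python
def sentential_pr(gold_labels, predictions):
    """gold_labels: dict sentence_id -> manual label (relation name or "none").
    predictions: dict sentence_id -> extracted relation (absent if none).
    Estimates precision over predicted sentences in the sample and recall
    over sampled sentences whose gold label is a relation."""
    predicted = [s for s in gold_labels if s in predictions]
    correct = [s for s in predicted if predictions[s] == gold_labels[s]]
    positives = [s for s in gold_labels if gold_labels[s] != "none"]
    precision = len(correct) / len(predicted) if predicted else 0.0
    recall = len(correct) / len(positives) if positives else 0.0
    return precision, recall
```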


\subsection{Precision / Recall Curves}
To compute precision / recall curves for the
tasks, we ranked the \sys\ extractions as follows.   For sentence-level evaluations, we
ordered according to the extraction factor score $\Phi^{\text{extract}}(z_i,x_i)$. For aggregate
comparisons, we set the score for an extraction $Y^r=true$ to be the max of the extraction
factor scores for the sentences where $r$ was extracted.
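A sketch of this ranking scheme: the aggregate score of a fact takes the max over its supporting sentences, and sweeping a threshold down the ranked list traces the curve (illustrative code, not the original implementation):

```python
def aggregate_scores(sentence_extractions):
    """sentence_extractions: list of (score, fact); the aggregate score of a
    fact is the max extraction-factor score over its supporting sentences."""
    agg = {}
    for score, fact in sentence_extractions:
        agg[fact] = max(score, agg.get(fact, float("-inf")))
    return agg

def pr_curve(scored_items, gold):
    """Sweep a confidence threshold down the ranking, emitting (P, R) points."""
    ranked = sorted(scored_items, key=lambda pair: -pair[0])
    tp, curve = 0, []
    for k, (_, item) in enumerate(ranked, start=1):
        tp += item in gold
        curve.append((tp / k, tp / len(gold)))
    return curve
```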




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiments}
\label{s:exp}

To evaluate our algorithm, we first compare it to an existing approach for using multi-instance learning with weak supervision~\cite{riedel-ecml10}, using the same data and features.   We report both aggregate extraction and sentential extraction results.
We then investigate relation-specific performance of our system. Finally, we report running time comparisons.

\begin{figure}[bt]
\hspace*{-10pt}
\includegraphics[width=3.3in]{Relation.pdf}
%\vspace*{1.5in}
\vspace{-24pt}
 \caption{Aggregate extraction precision / recall curves for
   Riedel~\etal~\shortcite{riedel-ecml10}, a reimplementation of that
   approach ({\sc SoloR}), and our algorithm ({\sc MultiR}).}
   \label{f:agg}
\end{figure}

\subsection{Aggregate Extraction}

Figure~\ref{f:agg} shows approximate precision / recall curves for three
systems computed with aggregate metrics (Section~\ref{s:metrics}) that test
how closely the extractions match the facts in Freebase.  The systems
include the original results reported by
Riedel~\etal~\shortcite{riedel-ecml10} as well as our new model (\sys).  We
also compare with {\sc SoloR}, a reimplementation of their algorithm, which we
built in Factorie~\cite{mccallum2009factorie}, and will use later to evaluate
sentential extraction.

\sys\ achieves competitive or higher precision over all ranges of recall,
with the exception of the very low recall range of approximately 0-1\%.  It also
significantly extends the highest recall achieved, from 20\% to 25\%, with
little loss in precision.  To investigate the low precision in the 0-1\% recall
range, we manually checked the ten highest confidence extractions
produced by \sys\ that were marked wrong. We found that all
ten were true facts that were simply missing from Freebase.  A manual evaluation, as we perform next
for sentential extraction, would remove this dip.

\subsection{Sentential Extraction}



\begin{figure}[bt]
\hspace*{-10pt}
\includegraphics[width=3.3in]{Sent.pdf}
%\vspace*{1.5in}
\vspace{-24pt}
 \caption{Sentential extraction precision / recall curves for \sys\ and
   {\sc SoloR}.}
   \vspace{-5pt}
   \label{f:sent}
\end{figure}

Although their model includes variables to model sentential extraction,
Riedel~\etal~\shortcite{riedel-ecml10} did not report sentence level
performance.  To generate the precision / recall curve we used the joint
model assignment score for each of the sentences that contributed to
the aggregate extraction decision.


Figure~\ref{f:sent} shows approximate precision / recall curves for
\sys\ and {\sc SoloR} computed against manually generated sentence labels, as defined in Section~\ref{s:metrics}.   \sys\ achieves significantly higher recall with a consistently high level of precision.    At the highest recall point, \sys\ reaches 72.4\% precision and 51.9\% recall, for
an F1 score of 60.5\%.
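For reference, the F1 score is the standard harmonic mean of precision and recall:
\[
F_1 = \frac{2 \cdot P \cdot R}{P + R} = \frac{2 \cdot 0.724 \cdot 0.519}{0.724 + 0.519} \approx 0.605.
\]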


\subsection{Relation-Specific Performance}

Since the data contains an unbalanced number of instances of each relation, we
also report precision and recall for each of the ten most frequent relations.
Let $S^{M}_r$ be the sentences where \sys\ extracted an instance of relation
$r \in R$, and let $S^F_r$ be the sentences that match the arguments of a
fact about relation $r$ in $\Delta$. For each $r$, we sample 100 sentences
from both $S^{M}_r$ and $S^F_r$ and manually check accuracy. To estimate
precision $\tilde{P}_r$, we compute the proportion of sampled sentences from
$S^{M}_r$ that are true relation mentions; to estimate recall $\tilde{R}_r$,
we compute the proportion of true relation mentions in the sample from
$S^F_r$ that are returned by our system.
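This estimation procedure can be sketched as follows (function names are hypothetical; `is_true_mention` stands in for the manual annotation step, and both samples are assumed to contain at least one true mention):

```python
import random

def relation_pr(S_M_r, S_F_r, is_true_mention, was_extracted, n=100):
    """Estimate per-relation precision/recall from random samples of size n.
    is_true_mention(s) stands in for manual annotation of sentence s;
    was_extracted(s) checks whether the system returned the relation for s."""
    prec_sample = random.sample(S_M_r, min(n, len(S_M_r)))
    rec_sample = random.sample(S_F_r, min(n, len(S_F_r)))
    # Precision: fraction of sampled system extractions that are true.
    P = sum(map(is_true_mention, prec_sample)) / len(prec_sample)
    # Recall: fraction of true sampled matches that the system returned.
    true_matches = [s for s in rec_sample if is_true_mention(s)]
    R = sum(map(was_extracted, true_matches)) / len(true_matches)
    return P, R
```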

\begin{table*}[bt]
\begin{small}
\begin{center}
\begin{tabular}{|c|r|c||l|l|}
\hline
\multirow{2}{*}{Relation} & \multicolumn{2}{c||}{Freebase Matches} & \multicolumn{2}{c|}{ \sys} \\
         & ~\#sents~~ & \% true & $\tilde{P}$ & $\tilde{R}$ \\
\hline
/business/person/company                     & 302  & 89.0  & 100.0  & 25.8 \\
/people/person/place\_lived                  & 450  & 60.0   & ~~80.0  & ~~6.7 \\
/location/location/contains                  & 2793 & 51.0  & 100.0  & 56.0  \\
/business/company/founders                  & 95   & 48.4 & ~~71.4 & 10.9  \\
/people/person/nationality                   & 723  & 41.0  & ~~85.7  & 15.0  \\
/location/neighborhood/neighborhood\_of      & 68   & 39.7 & 100.0 & 11.1 \\
/people/person/children                       & 30   & 80.0   & 100.0 & ~~8.3 \\
/people/deceased\_person/place\_of\_death     & 68   & 22.1 & 100.0 & 20.0 \\
/people/person/place\_of\_birth              & 162  & 12.0  & 100.0 & 33.0  \\
/location/country/administrative\_divisions  & 424  & ~~0.2 & N/A  & ~~0.0    \\
\hline
\end{tabular}
\end{center}
\end{small}
\caption{Estimated precision and recall by relation, as well as the number of matched sentences (\#sents) and accuracy (\%~true) of matches between sentences and facts in Freebase. }
\label{t:sent}
\end{table*}

Table~\ref{t:sent} presents this approximate precision and recall for \sys\ on each of the relations,
along with statistics we computed to measure the quality  of the weak supervision.
Precision is high for the majority of relations but recall is consistently lower.
We also see that the Freebase matches are highly skewed in quantity
and can be low quality for some relations,
with very few of them actually corresponding to true extractions.
The approach generally performs best on the relations with a sufficiently large number of true matches,
in many cases even achieving precision that outperforms the accuracy of the heuristic matches, at reasonable
recall levels.

\subsection{Overlapping Relations}

Table~\ref{t:sent} also highlights some of the effects of learning with overlapping relations.  For example, in the data, almost all of the matches for the administrative\_divisions relation overlap with the contains relation, because both model relationships between a pair of locations.  Since, in general, sentences are much more likely to describe a contains relation, this overlap leads to a situation where almost none of the administrative\_divisions matches are true ones, and we cannot accurately learn an extractor.  However, we can still learn to accurately extract the contains relation, despite the distracting matches.   Similarly, the place\_of\_birth and place\_of\_death relations tend to overlap, since it is often the case that people are born and die in the same city.   In both cases, the precision outperforms the labeling accuracy and the recall is relatively high.

To measure the impact of modeling overlapping relations, we also evaluated a simple, restricted baseline.  Instead of labeling each entity pair with the set of all true Freebase facts, we created a data set in which each true relation produces a separate training example.  Training \sys\ on this data simulates the effects of the conflicting supervision that can arise when overlaps are not modeled.   On average across relations, precision increases 12 points but recall drops 26 points, for an overall reduction in F1 score from 60.5\% to 40.3\%.
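The baseline's data transformation is straightforward; a sketch (data shapes assumed for illustration):

```python
def split_overlapping(training_pairs):
    """training_pairs: list of (sentences, facts), where facts is the set of
    true Freebase relations for an entity pair.  The overlap-aware model
    trains on the full fact set; the restricted baseline instead emits one
    single-fact training example per true relation."""
    restricted = []
    for sentences, facts in training_pairs:
        for fact in facts:
            restricted.append((sentences, {fact}))
    return restricted
```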







\subsection{Running Time}

One final advantage of our model is its modest running time.   Our implementation of the Riedel~\etal~\shortcite{riedel-ecml10} approach required approximately 6 hours to train on the NY Times 2005--2006 data and 4 hours to test on the NY Times 2007 data, in each case excluding preprocessing.   Although they use sampling for inference, the global aggregation variables require reasoning about a sample space that is exponentially large in the number of sentences.

In contrast, our approach required approximately one minute to train and less than one second to test on the same data.    This advantage comes from the decomposition made possible by the deterministic-OR aggregation variables.   At test time, we simply consider each sentence in isolation, and during training our approximation to the weighted assignment problem is linear in the number of sentences.

\subsection{Discussion}
The sentential extraction results demonstrate the advantages of learning a model that is primarily driven by sentence-level features.
Although previous approaches have used more sophisticated features for aggregating the evidence from individual sentences, we demonstrate that aggregating strong sentence-level evidence with a simple deterministic OR that models overlapping relations is more effective, and also enables training of a sentence extractor that runs with no aggregate information.

While the Riedel~\etal\ approach does include a model of which sentences express relations, it makes significant use of aggregate features that are primarily designed for entity-level relation prediction, and it has a less detailed model of extractions at the individual sentence level.   Perhaps surprisingly, our model is able to do better at both the sentential and aggregate levels.



