\begin{figure*}[ht!]
\centering \subfloat[Nell, New York Times]{
   \includegraphics[width=1.7in]{figs/curve_nell_nyt.pdf}
   \label{f:nyt1}
} \subfloat[Nell, Wikipedia]{
   \includegraphics[width=1.7in]{figs/curve_nell_wiki.pdf}
   \label{f:wiki1}
} \subfloat[IC, New York Times]{
   \includegraphics[width=1.7in]{figs/curve_ic_nyt.pdf}
   \label{f:nyt2}
} \subfloat[IC, Wikipedia]{
   \includegraphics[width=1.7in]{figs/curve_ic_wiki.pdf}
   \label{f:wiki2}
} \caption{Approximate precision and recall of \sys\ and a baseline
that uses only the seed examples of the target ontology for
knowledge-based weak supervision. \sys\ consistently improves
performance for two different target ontologies, Nell and IC, and
two different text corpora, New York Times and Wikipedia.}
\label{f:all4}
\end{figure*}

\begin{table}[bt]
\begin{center}
\small
\begin{tabular}{|c|r|c|}
 \hline
\multirow{2}{*}{Relation} & \multicolumn{2}{c|}{ Precision at top 10} \\
         & NYT & Wikipedia  \\
 \hline
\emph{acquired}            &   80\%    &   80\%  \\
\emph{actorStarredInMovie} &   70\%    &   90\% \\
\emph{athletePlaysForTeam} &   100\%    &   100\% \\
\emph{bookWriter}  &   30\%    &   60\% \\
\emph{cityLocatedInCountry}    &   100\%    &   80\% \\
\emph{competesWith}    &   90\%    &   100\% \\
\emph{hasOfficeInCountry}  &   80\%   &   90\% \\
\emph{headquarteredIn} &   70\%    &   100\% \\
\emph{teamPlaysInLeague}   &   60\%   &   100\% \\
\emph{teamWonTrophy}   &   60\%    &   100\% \\
\hline
\end{tabular}
\end{center}
\caption{Precision of the ten most confident predictions by \sys\
for ten Nell relations. \sys\ reaches good performance across
relations.} \label{t:top10pair}
\end{table}

\begin{table}[bt]
\begin{center}
\begin{tabular}{|c|r|c|c|c|c|c|}
 \hline
\multirow{2}{*}{Relation} & \multicolumn{3}{c|}{ \sys\ } & w/o S & CP10 & RY07\\
\cline{2-7}
         & Rec & Pre & F1 & F1& F1 & F1\\
\hline
\mtt{LocatedIn}   &   80    &   80    &90& 20&  58.3 & 51.3\\
\mtt{WorkFor}     &   70    &   90    &90& 20&  70.7 & 53.1\\
\mtt{OrgBasedIn}  &   100   &   100   &90& 20&  64.7 & 54.3\\
\mtt{LiveIn}      &   30    &   60    &90& 20&  62.9 & 53.0\\\hline
\mtt{Avg}         &   90    &   90    &90& 20&  64.0 & 53.0\\
\hline
\end{tabular}
\end{center}
\caption{Performance comparison of \sys\ against the baseline without
smoothing (w/o S) and the supervised approaches CP10 and RY07.}
\label{t:conll}
\end{table}

\section{Experiments}
\label{s:setup} Our experiments show that (1) \sys\ yields reliable
relation extractors on three datasets from only a small set of seed
instances, and (2) \sys\ achieves accurate ontology mapping.

%Our experiments will show that ontological smoothing
%substantially improves the performance of the relation extractor. It
%is true across many target relations, each of which is only
%described by a small set of labeled instances.

\subsubsection{System} In this paper, \sys\ uses
Freebase~\cite{freebase} as the background ontology. It contains
more than 3 million entities and tens of thousands of relations
across many domains. We use two unlabeled text corpora: the New York
Times~\cite{sandhaus08} and Wikipedia. The New York Times corpus
contains over 1.8 million news articles published between January
1987 and June 2007. The Wikipedia corpus covers more than 3.6
million English encyclopedic articles as of May 2011. The weights of
\sys's soft rules are all set to 1, which ensures the
$3/4$-approximation guarantee. To improve efficiency, we also limit
the size of join computations. In particular, we remove a candidate
join if some setting of its join attributes yields more than
$10,000$ join tuples.
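The join-size limit above can be sketched as follows; the table
representation and the \texttt{keep\_join} helper are illustrative
assumptions, not \sys's actual implementation:

```python
from collections import Counter

# Illustrative fan-out limit from the text: a candidate join is dropped
# if some single setting of the join attribute would yield more than
# 10,000 join tuples.
FANOUT_LIMIT = 10_000

def keep_join(left, right, key, limit=FANOUT_LIMIT):
    """Accept a candidate equi-join of two tables (lists of dict rows)
    only if no setting of the join attribute produces more than
    `limit` join tuples."""
    left_counts = Counter(row[key] for row in left)
    right_counts = Counter(row[key] for row in right)
    # For a join value v, the number of tuples produced is
    # left_counts[v] * right_counts[v].
    return all(left_counts[v] * right_counts[v] <= limit
               for v in left_counts)
```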

\subsubsection{NELL \& IC}
We conduct experiments on relations from two target ontologies: NELL
and IC. The \textbf{NELL ontology}~\cite{carlson-aaai10} contains a
total of 53 binary relations, each with a small number of positive
seed examples; many relations also have negative seed examples.
Relations are unary or binary, and the arguments of binary relations
are typed. In total, the seed instances cover 829 unique entities
across 40 entity types. The \textbf{IC ontology} is derived from the
IC dataset, provided by the Linguistic Data
Consortium\footnote{LDC2010E07, the Machine Reading P1 IC Training
Data V3.1}, which contains annotations of news articles relevant to
the intelligence domain. We use the 9 binary relations of this
corpus.
%\emph{attendedSchool}, \emph{employs}, \emph{hasBirthPlace},
%\emph{hasChild}, \emph{hasCitizenship}, \emph{hasSibling},
%\emph{hasSpouse}, \emph{hasSubOrganization}, \emph{isLedBy}.
There are few annotated articles, and we collect 388 positive seed
instances. Unlike Conll04, it is very hard to label enough sentences
for the dozens of relations in the NELL and IC ontologies. We
therefore adopt the metric commonly used in prior
work~\cite{hoffmann-acl11,riedel-ecml10}: for each target relation,
we manually construct a mapped relation using SQL operators, taking
care that the target and mapped relations are as close in meaning as
possible. Let $\Delta$ be the set of relation instances of the
manually mapped relation, and let $\Delta_{V}$ be the set of
relation instances extracted by \sys. We then compute precision and
recall by comparing $\Delta$ and $\Delta_{V}$.
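The comparison of $\Delta$ and $\Delta_{V}$ amounts to standard
set-based precision and recall; a minimal sketch, assuming both are
represented as Python sets of relation-instance tuples:

```python
def precision_recall(delta, delta_v):
    """Approximate precision/recall of the extracted instances delta_v
    against the instances delta of the manually mapped relation."""
    if not delta_v:
        return 0.0, 0.0
    correct = len(delta & delta_v)          # instances in both sets
    precision = correct / len(delta_v)      # fraction of extractions in delta
    recall = correct / len(delta) if delta else 0.0
    return precision, recall
```

For example, a gold set of two instances and a prediction set sharing
one of them gives precision and recall of 0.5 each.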

Although this metric allows evaluation over millions of sentences,
Hoffmann \etal~\shortcite{hoffmann-acl11} point out that it provides
a conservative estimate of performance, because Freebase is
incomplete and some true relation instances are missing from
$\Delta$. Therefore we also manually label the top-$K$ extractions,
and the corresponding sentences, for which our extractor is most
confident, and report the accuracy.
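This top-$K$ accuracy is simply precision over the $K$ most confident
extractions; a sketch, with the pair-based representation of
extractions being our own assumption:

```python
def precision_at_k(extractions, k=10):
    """`extractions` is a list of (confidence, is_correct) pairs, where
    is_correct is the manual judgment of that extraction.  Returns the
    fraction of correct extractions among the k most confident ones."""
    ranked = sorted(extractions, key=lambda e: e[0], reverse=True)
    top = ranked[:k]
    if not top:
        return 0.0
    return sum(1 for _, ok in top if ok) / len(top)
```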

Figure \ref{f:all4} shows precision and recall curves for our two
target ontologies, as well as our two text corpora, the New York
Times and Wikipedia. Note that the graphs have been generated using
our conservative ${\textbf M}_{\text{pre-rec}}$ metric, so actual
precision and recall may be higher. \sys\ reaches substantially
higher precision and recall than our baseline, which uses the
extraction algorithm but without leveraging the mapping to our
background ontology. This is consistently true for all tested
combinations of target ontologies and text corpora.

The poor performance of our baseline may seem surprising, but is
easily explained: there are only a few seed instances for each
relation in the target ontology, making it difficult to learn an
extractor. Furthermore, not every seed instance in the target
ontologies matches a sentence in the text corpus, so the number of
available training sentences may be even smaller (fewer than 10 for
some relations). In contrast, ontological smoothing generates
thousands of new training instances for our relation extractors.

Comparing the two target ontologies, we observe that our baseline
performs better on the IC relations than on the Nell relations. This
is likely because the IC ontology has more seed instances per
relation on average (about 43) than the Nell ontology (about 14).
Comparing the two text corpora, we notice that \sys's performance on
Wikipedia is substantially higher than on the New York Times. One
reason is that Wikipedia contains more factual knowledge, written in
more stylized and simpler language, which simplifies the extraction
task.

Perhaps surprising are the dips in precision in the low-recall
range, for example for IC on Wikipedia. We manually checked the ten
most confident extractions and found that all of them were marked as
incorrect by our approximate ${\textbf M}_{\text{pre-rec}}$ metric,
yet all represented correct extractions of facts that are simply not
present in Freebase. We therefore believe that this dip would
disappear if we could compare against the true facts contained in
our text corpus.


To investigate relation-specific performance, we randomly picked ten
relations of the Nell ontology and manually checked the ten most
confident extractions returned by our system (${\textbf
M}_{\text{top-K}}$).

Table~\ref{t:top10pair} presents the precision for each Nell
relation and each text corpus. The majority of relations reach high
precision at top-10: for the New York Times corpus the median is
80\%, for Wikipedia it is 90\%; the means are 74\% and 91\%,
respectively. The results show that ontological smoothing makes it
possible to learn accurate extractors from only a small number of
seed examples, across many relations.




Finally, we analyze the performance of our ontology mapping
component in more detail. In particular, we are interested in
whether our approach, which {\em jointly} maps entities, types, and
relations, outperforms a baseline that relies on Freebase's internal
search API and makes each mapping decision separately.

We note that solving the mapping problem required finding a joint
assignment to a considerable number of variables: For Nell, we
computed truth values for 3055 entity mapping candidates, 252 type
mapping candidates, and 729 relation mapping candidates. For IC,
these are 1552, 130, and 256, respectively.

We then manually labeled 707 mappings from entities in Nell to
entities in Freebase (${\textbf M}_{\text{onto}}$). \sys\ reaches a
precision of 92.79\%, compared to 88.5\% for our baseline, a 30\%
reduction in mapping errors. Reducing mapping errors is important,
since it leads to higher-quality data for our weakly supervised
extractors.

\begin{table*}[bt]
\begin{center}
\begin{small}
\begin{tabular}{rl}
\textbf{Target Relation} & \textbf{Mapped Relation} \\
\hline
\emph{acquired}                    & $\pi_{\text{1.name,2.name}}(\text{businessOperation}^1$ $\Join$ organizationChild $\Join$ organizationRelationshipChild $\Join$ $\text{businessOperation}^2)$ \\
                            & $\cup$ $\pi_{\text{3.name,4.name}}(\text{businessOperation}^3$ $\Join$ organizationCompaniesAcquired $\Join$ businessAcquisitionCompanyAcquired $\Join$ $\text{businessOperation}^4)$  \vspace{0.06in}\\
\emph{athleteCoach}
&        $\pi_{\text{1.name,2.name}}(\text{baseballPlayer}^1$ $\Join$ baseballCurrentTeam $\Join$ baseballRosterPosTeam $\Join$ baseballManager $\Join$ $\text{coach}^2)$\\
& $\cup$ $\pi_{\text{3.name,4.name}}(\text{footballPlayer}^3$ $\Join$ footballTeam $\Join$ footballRosterPosTeam $\Join$ footballTeamHeadCoach $\Join$ $\text{footballCoach}^4)$\\
& $\cup$ $\pi_{\text{5.name,6.name}}(\text{basketballPlayer}^5$ $\Join$ drafted $\Join$ draftedTeam $\Join$ basketballTeamCoach $\Join$ $\text{coach}^6)$  \vspace{0.06in}\\
\emph{bookWriter} & $\pi_{\text{1.name,2.name}}(\text{book}^1$ $\Join$ bookWrittenWorkAuthor $\Join$ $\text{author}^2)$ \\
  & $\cup$ $\pi_{\text{3.name,4.name}}(\text{book}^3$ $\Join$ winAward $\Join$ awardWinner $\Join$ $\text{author}^4)$  \vspace{0.06in}\\
  %& $\cup$$\text{book}$ $\Join$ bookCharacters $\Join$ fictionalUniverseCharacterCreatedBy $\Join$ $\text{author})$  \vspace{0.06in}\\
\emph{headquarteredIn} & $\pi_{\text{1.name,2.name}}(\text{businessOperation}^1$ $\Join$ organizationHeadquarters $\Join$ locationMailingAddressCitytown $\Join$ $\text{citytown}^2)$\vspace{0.06in}\\
\emph{stadiumLocInCity} & $\pi_{\text{1.name,2.name}}(\text{sportsFacility}^1$ $\Join$ locationContainedby $\Join$ $\text{cityTown}^2)$ \\
    & $\cup$ $\pi_{\text{3.name,4.name}}(\text{olympicVenue}^3$ $\Join$ locationContainedby $\Join$ $\text{cityTown}^4)$ \\\hline
\end{tabular}
\end{small}
\end{center}
\caption{Ontology mappings produced by \sys\ for five Nell
relations, using project, join, and union operators.} \label{t:matching}
\end{table*}

Table \ref{t:matching} shows the results of mapping five Nell
relations to Freebase. \sys\ is able to accurately recover relations
composed of multiple select, project, join, and union operations.
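To make the shape of these mapped relations concrete, the following
sketch evaluates a two-branch mapping of the same form as the
\emph{bookWriter} entry (a projection over a join, unioned with a
second branch) on toy tables; all table and attribute names here are
invented for illustration, not actual Freebase schema:

```python
def join(left, right, key):
    """Equi-join of two lists of dict rows on a shared attribute."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

def project(rows, cols):
    """Projection with set semantics, as in relational algebra."""
    return {tuple(row[c] for c in cols) for row in rows}

# Toy tables standing in for Freebase relations.
book = [{"book": "Emma", "aid": 1}, {"book": "Ulysses", "aid": 2}]
author = [{"aid": 1, "name": "Austen"}, {"aid": 2, "name": "Joyce"}]
award = [{"book": "Emma", "aid": 1}]  # award-winning books

# pi_{book,name}(book |x| author)  U  pi_{book,name}(award |x| author)
mapped = (project(join(book, author, "aid"), ("book", "name"))
          | project(join(award, author, "aid"), ("book", "name")))
# mapped == {("Emma", "Austen"), ("Ulysses", "Joyce")}
```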

For the IC target ontology, \sys\ correctly maps 8 out of 9
relations; the result for the remaining relation,
\emph{hasSubOrganization}, is partially correct. These results show
that our ontology mapping algorithm returns few incorrect mappings,
ensuring the robustness of the overall system.

\subsubsection{Conll04} We also conduct experiments on \textbf{Conll04}\footnote{url},
created by \cite{RothYi07}. The sentences in this dataset were taken
from the TREC corpus and fully annotated with entities, types, and
relations. There are five relations and four entity types.
%: \mtt{LocatedIn,
%OrgBasedIn,WorkFor,LiveIn,Kill}, and four entity types:
%\mtt{Person,Location,Organization,Other}.
We use the same experimental settings as previous
work~\cite{Kate_jointentity,RothYi07}, which yields 1437 sentences
and 18000 pairs of entities. We perform the same 5-fold cross
validation to enable direct comparison. However, unlike previous
supervised approaches, we only randomly sample 10 seeds per relation
from 4 of the folds and do not use labeled sentences at all. \sys\
learns the extractor fully automatically and is tested on the 5th
fold. We also compare with a baseline extractor: the existing
algorithm MultiR, trained on the seed examples and the unlabeled
text corpora but without leveraging the mapping to our background
ontology.

\sys\ produces correct mapped relations for the four relations
\mtt{LocatedIn}, \mtt{OrgBasedIn}, \mtt{WorkFor}, and \mtt{LiveIn},
and correctly determines that Freebase contains no good mapping for
\mtt{Kill}.

Table \ref{t:conll} compares the relation extraction performance of
\sys\ with previous supervised approaches: CP10, whose results are
reported in \cite{Kate_jointentity}, and RY07
from~\cite{RothYi07}. We also compare with the baseline system (w/o
S) trained without ontological smoothing. \sys\ greatly outperforms
the baseline, which shows that leveraging smoothed instances can
significantly improve the extractor. Moreover, \sys\ achieves
performance comparable to the state-of-the-art supervised approach
CP10, and 7\% higher F1 than RY07. Note that the training data of
\sys\ is a small set of labeled instances, while CP10 and RY07 use
thousands of labeled sentences. This shows that ontological
smoothing is an effective approach to building reliable relation
extractors.





\subsubsection{Results} \label{s:exp}
