\section{Experimental Setup}
\label{s:setup}

In our experiments, we aim to show that ontological smoothing
substantially improves the performance of an extractor, and that
this holds across many target relations, each of which is described
by only a small set of seed examples. Furthermore, we separately
investigate the performance of \sys's crucial ontology mapping
component.

\subsection{Target Ontologies}
We test \sys\ on two different target ontologies, each of which makes
extraction particularly challenging.

%\begin{itemize}
%\item
{\bf Nell Ontology}: \; The Nell ontology~\cite{carlson-aaai10}
contains a total of 53 binary relations, each with a small number of
positive seed examples. In addition, negative seed examples exist
for many of the relations. Relations are unary or binary, and the
arguments of binary relations are typed. In total, the seed
instances cover 829 unique entities across 40 different entity
types. The ontology contains some inconsistencies: for example,
``Yankee", ``Yankees", and ``New York Yankees" appear as separate
entities in different relations, although they refer to the same
real-world entity.

%\item
{\bf IC Ontology}: \; The IC ontology is derived from the IC
dataset, which contains annotations of news articles relevant to the
intelligence domain; for example, the dataset includes articles
about terrorists and attacks. It is provided by the Linguistic Data
Consortium\footnote{LDC2010E07, the Machine Reading P1 IC Training
Data V3.1}. In total, it covers annotations for 33 relations, but
due to limitations of MultiR, we are currently only able to handle
binary relations. We test \sys\ on the 9 binary relations of the
corpus: \emph{attendedSchool}, \emph{employs}, \emph{hasBirthPlace},
\emph{hasChild}, \emph{hasCitizenship}, \emph{hasSibling},
\emph{hasSpouse}, \emph{hasSubOrganization}, and \emph{isLedBy}. We
collected instances for these relations from the annotated articles.
Assuming gold coreference annotations, we replaced arguments
consisting of pronouns or nominals with the corresponding named
entities.
We allowed a maximum of 100 seed instances per relation.
In total, the resulting IC ontology contains 388 positive seed
examples, but no negative examples. This ontology, too, contains
some inconsistencies, since the same named entities are sometimes
referred to by different names. Furthermore, many of the annotated
entities are relatively uncommon.
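The argument-normalization step described above can be sketched as
follows; the chain and instance encodings are hypothetical stand-ins
for the LDC annotation format:

```python
def normalize_arguments(instances, coref_chains):
    """Replace pronoun/nominal arguments with the named entity of their
    gold coreference chain (hypothetical encoding).

    coref_chains: list of chains; each chain is a list of
    (surface_string, mention_type) pairs, where mention_type is
    "NAM" (named), "NOM" (nominal), or "PRO" (pronoun).
    instances: list of (relation, arg1, arg2) triples.
    """
    canonical = {}
    for chain in coref_chains:
        # Pick the first named mention in the chain as the canonical form.
        named = [s for s, t in chain if t == "NAM"]
        if not named:
            continue  # no named entity in the chain: leave mentions as-is
        for surface, _ in chain:
            canonical[surface] = named[0]
    resolved = []
    for relation, arg1, arg2 in instances:
        resolved.append((relation,
                         canonical.get(arg1, arg1),
                         canonical.get(arg2, arg2)))
    return resolved
```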
%\end{itemize}

\subsection{Background Ontology}
As our background ontology, we use Freebase~\cite{freebase} in its
version\footnote{http://download.freebase.com/datadumps/latest/freebase-datadump-quadruples.tsv.bz2}
of May 2011. Freebase is ideally suited as a background ontology for several reasons:

\begin{itemize}
  \item Freebase contains more than 3 million entities and tens of thousands of relations
     across a large number of domains. This makes it likely that there exists relevant
     information for virtually any target ontology.
  \item Freebase is organized as a database of triples of the form \texttt{<arg1, relation,
     arg2>}. This reduces the amount of ambiguity and enables easy processing.
  \item Freebase often contains synonyms for entities. Synonyms enable greater recall when
     heuristically matching instances to text.
  \item Freebase ranks candidate entities in its keyword search API. We use this ranking
     for identifying initial candidate mappings of entities in our target ontologies (our
     algorithm reduces the mapping error by a further 30\%).
\end{itemize}
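To make the triple representation and synonym handling concrete, a
minimal in-memory triple index might look as follows; the relation
and alias shown are illustrative, not taken from the actual Freebase
dump:

```python
from collections import defaultdict

class TripleStore:
    """Minimal index over <arg1, relation, arg2> triples."""

    def __init__(self, triples):
        self.by_relation = defaultdict(set)
        self.aliases = defaultdict(set)  # entity -> known synonyms
        for arg1, rel, arg2 in triples:
            self.by_relation[rel].add((arg1, arg2))

    def add_alias(self, entity, synonym):
        self.aliases[entity].add(synonym)

    def instances(self, relation):
        """All (arg1, arg2) pairs of a relation."""
        return self.by_relation[relation]

    def surface_forms(self, entity):
        # Synonyms widen recall when heuristically matching
        # instances to sentences in the text corpus.
        return {entity} | self.aliases[entity]

store = TripleStore([("New York Yankees", "teamPlaysInLeague", "MLB")])
store.add_alias("New York Yankees", "Yankees")
```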

Despite its advantages, Freebase also poses important challenges. It
makes heavy use of n:m helper relations in order to accurately represent its
vast amount of knowledge. Even simple facts, such as a player's coach, are
often not directly available, but can only be obtained by following a long
chain of links and merging the results of several relations. Furthermore,
there exist redundant relations and a large number of irrelevant entities,
while many important facts are missing.



\subsection{Text Corpora}

We evaluate \sys\ on two text corpora, the New York Times and Wikipedia.
The New York Times corpus~\cite{sandhaus08} contains over 1.8 million news articles
published between January 1987 and June 2007. The Wikipedia corpus covers
more than 3.6 million English-language encyclopedic articles from
May 2011.

\subsection{Parameter Settings}
The weights for \sys's set of rules are currently set as follows:
all weights are set to 1, except the weights for rules \ref{eq:negativeinstances} and \ref{rule:mutualexclusion},
which are set to 100. We use a high weight for these rules since we
do not want them to be violated.
In future work, we would like to automatically learn the weights for
\sys's set of rules.
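As a rough illustration of this weighting scheme (not \sys's actual
inference code), the score of a candidate mapping assignment can be
viewed as a weighted count of satisfied rules, where a weight of 100
makes a rule effectively hard; the two example rules below are
hypothetical:

```python
def assignment_score(assignment, rules):
    """Score an assignment as the sum of weights of satisfied rules.

    rules: list of (weight, predicate) pairs; each predicate maps an
    assignment (a dict of decision variables) to True/False.  With
    fewer than 100 weight-1 rules, violating a weight-100 rule always
    costs more than violating all soft rules combined.
    """
    return sum(weight for weight, pred in rules if pred(assignment))

# Hypothetical rule set: one soft rule, one near-hard rule.
rules = [
    (1, lambda a: a.get("entity_mapped", False)),
    (100, lambda a: not a.get("violates_negative_seed", False)),
]
```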


To improve efficiency, we also limit the size of join
computations.\footnote{In particular, we remove a candidate join if there exists
a setting of the join attributes that yields more than $100 \times 100$
join tuples.}
For example, the result set of $\emph{personBornInCountry} \Join \emph{countryHasCity}$
lists, for each person, all cities in her home country.
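The join-size limit from the footnote can be sketched as follows; the
threshold of 100 matches the footnote, while the encoding of relations
as sets of pairs is a simplification:

```python
from collections import defaultdict

FANOUT_LIMIT = 100  # per the footnote: drop joins exceeding 100 x 100 tuples

def keep_join(rel_ab, rel_bc):
    """Decide whether to keep the candidate join rel_ab |><| rel_bc.

    rel_ab, rel_bc: sets of (x, y) pairs, joined on the middle attribute.
    The join is dropped if some middle value links more than FANOUT_LIMIT
    left arguments to more than FANOUT_LIMIT right arguments, i.e. that
    single value alone contributes more than 100 x 100 join tuples.
    """
    left = defaultdict(set)   # middle value -> left arguments
    right = defaultdict(set)  # middle value -> right arguments
    for a, b in rel_ab:
        left[b].add(a)
    for b, c in rel_bc:
        right[b].add(c)
    for b in left:
        if len(left[b]) > FANOUT_LIMIT and len(right[b]) > FANOUT_LIMIT:
            return False
    return True
```

This captures why $\emph{personBornInCountry} \Join \emph{countryHasCity}$ is filtered away: a single country links very many people to very many cities.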


\subsection{Overall Performance Metrics}

To evaluate overall system performance, we run the full system which
includes mapping the target ontology to the background ontology,
generating training data using weak supervision, and learning an
extractor.

Evaluating the quality of the learned extractor is challenging,
however, since less than 1\% of sentences in our large text corpora
contain relations relevant to our target ontologies. We therefore
compute two approximate metrics:

%\begin{itemize}
%\item
${\textbf M}_{\text{pre-rec}}$: \; For each relation in our target ontologies, we manually create
  a view in our background ontology using the select, project, join,
  and union operators. We construct this view such that it most
  accurately matches the meaning of the corresponding relation in
  the target ontology. We then collect all instances of the newly
  created view in the background ontology; let this set be $B$.
  We next filter this set, keeping only those instances for which there
  exists a sentence in our text corpus $c$ that contains both
  arguments. Let us denote this filtered set by $\tilde{G}^c \subseteq B$.
  We use $\tilde{G}^c$ as an approximation of $G^c$, the set of
  facts contained in text corpus $c$. To estimate the
  precision and recall of our extractor, we simply compare its set
  of extractions $E^c$ on corpus $c$ to $\tilde{G}^c$. In practice, this approach
  provides a very conservative estimate of the quality of our extractor,
  since many facts in $\tilde{G}^c$ are not actually expressed in our text corpora.
  We compute precision and recall curves by varying the confidence
  threshold of our extractor.
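Sweeping the confidence threshold over the extractor's ranked output
yields the precision/recall curve; a minimal sketch, with the filtered
gold set and the scored extractions as plain Python collections:

```python
def pr_curve(extractions, gold):
    """Approximate precision/recall against the filtered gold set.

    extractions: list of (fact, confidence) pairs, where a fact is
                 an (arg1, relation, arg2) triple.
    gold: the filtered gold set of facts whose arguments co-occur
          in some sentence of the corpus.
    Returns one (precision, recall) point per threshold position,
    obtained by admitting extractions in decreasing confidence order.
    """
    ranked = sorted(extractions, key=lambda x: -x[1])
    points, correct = [], 0
    for i, (fact, _conf) in enumerate(ranked, start=1):
        if fact in gold:
            correct += 1
        precision = correct / i
        recall = correct / len(gold) if gold else 0.0
        points.append((precision, recall))
    return points
```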

%\item
${\textbf M}_{\text{top-K}}$: \; To evaluate the relation-specific performance of \sys, we manually
  check the top-$K$ extractions for which our extractor is most confident.
  In our experiments, we set $K=10$. To verify an extraction, we manually
  check all sentences that contain both arguments.
%\end{itemize}

To ensure that we test the quality of the learned extractor
independently of the entity pairs used during training, we further
require that neither the sentences nor the entity pairs seen at test
time have been seen at training time.
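This separation can be enforced by splitting on entity pairs first and
only then assigning sentences to the two sets; a minimal sketch, with
labeled sentences assumed to be given as (text, entity-pair) items:

```python
import random

def split_by_entity_pair(sentences, test_fraction=0.2, seed=0):
    """Split labeled sentences so that no entity pair (and therefore
    no sentence) occurs in both the training and the test set.

    sentences: list of (sentence_text, (arg1, arg2)) items.
    """
    pairs = sorted({pair for _, pair in sentences})
    rng = random.Random(seed)
    rng.shuffle(pairs)
    n_test = int(len(pairs) * test_fraction)
    test_pairs = set(pairs[:n_test])
    train = [s for s in sentences if s[1] not in test_pairs]
    test = [s for s in sentences if s[1] in test_pairs]
    return train, test
```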

As a baseline, we train the MultiR extractor using only the seed examples
obtained from the target ontology, but without leveraging the mapping
to our background ontology.

\subsection{Ontology Mapping Metric}

The ontology mapping component is \sys's most important one, so we
are also interested in evaluating its performance independently of
relation extraction.

${\textbf M}_{\text{onto}}$: \; We investigate precision and recall for entity mapping,
type mapping, and relation mapping by manually validating the
individual decisions. Note that our algorithm does not always return
a mapping element in the background ontology for an element in the
target ontology. This often makes sense, since Freebase, although
large, does not cover all entities, types, or relations.

As a baseline, we use a naive ontology mapper which does not perform
joint inference, but merely uses Freebase's internal search API to
map objects in the target ontology to objects in Freebase.





\section{Experiments}
\label{s:exp} We first report on overall relation extraction
performance, and then investigate relation-specific results.
Finally, we report detailed results of \sys's ontology mapping
component.


\begin{figure*}[ht]
\centering \subfloat[Nell, New York Times]{
   \includegraphics[width=3.4in]{figs/curve_nell_nyt.pdf}
   \label{f:nyt1}
}
\subfloat[Nell, Wikipedia]{
   \includegraphics[width=3.4in]{figs/curve_nell_wiki.pdf}
   \label{f:wiki1}
}

\subfloat[IC, New York Times]{
   \includegraphics[width=3.4in]{figs/curve_ic_nyt.pdf}
   \label{f:nyt2}
}
\subfloat[IC, Wikipedia]{
   \includegraphics[width=3.4in]{figs/curve_ic_wiki.pdf}
   \label{f:wiki2}
}

\caption{Approximate precision and recall of \sys\ and a baseline that
uses only the seed examples of the target ontology for knowledge-based
weak supervision. \sys\ consistently improves performance for two different
target ontologies, Nell and IC, and two different text corpora, New York Times
and Wikipedia.}
\label{f:all4}
\end{figure*}


\subsection{Overall Performance}

\subsubsection{Overall Extraction Quality}

Figure \ref{f:all4} shows precision and recall curves for our two
target ontologies, Nell and IC, as well as our two text corpora,
the New York Times and Wikipedia. Note that the graphs were
generated using our conservative ${\textbf M}_{\text{pre-rec}}$ metric, so actual precision
and recall may be higher.
\sys\ reaches substantially higher precision and recall than our
baseline, which uses the same extraction algorithm but does not leverage
the mapping to our background ontology. This holds consistently across
all tested combinations of target ontologies and text corpora.

The poor performance of our baseline may seem surprising, but is
easily explained: there are only a few seed instances for each
relation in the target ontology, making it difficult to learn an
extractor. Furthermore, not every seed instance in the target
ontologies matches a sentence in the text corpus, so the number of
available training sentences may be even smaller; for some relations
it is less than 10. In contrast, ontological smoothing generates
thousands of new training instances for our relation extractors.

Comparing the two target ontologies, we observe that our baseline
performs better on the IC relations than on the Nell relations. This
is likely because the IC ontology has, on average, more seed
instances per relation (about 43) than the Nell ontology (about 14).
Comparing the two text corpora, we notice that \sys's performance
on Wikipedia is substantially higher than on the New York Times. One
reason is that Wikipedia contains more factual knowledge, expressed
in more standardized and simpler language, which simplifies the
extraction task.

Perhaps surprising are the dips in precision in the low-recall
range, for example in the case of IC and Wikipedia. We manually
checked the ten most confident extractions and found that, while
all of them were marked as incorrect by our approximate
${\textbf M}_{\text{pre-rec}}$ metric, all of them actually represented correct extractions
of facts that were simply not present in Freebase. We therefore
believe that this dip would disappear if we were able to compare
against the true set of facts contained in our text corpus.






\subsubsection{Relation-specific Extraction Quality}
To investigate relation-specific performance, we randomly picked ten
relations of the Nell ontology and manually checked the ten most
confident extractions returned by our system (${\textbf M}_{\text{top-K}}$).


Table~\ref{t:top10pair} presents the precision for each Nell
relation and each text corpus. The majority of relations reach high
precision at top-10: for the New York Times corpus the median is
80\%, for Wikipedia it is 90\%; the means are 74\% and 91\%,
respectively. The results show that ontological smoothing makes it
possible to learn accurate extractors from only a small number of
seed examples, across many relations.


\begin{table}[bt]
\begin{center}
\begin{tabular}{|l|r|r|}
\hline
\multirow{2}{*}{Relation} & \multicolumn{2}{c|}{ Precision at top 10} \\
         & NYT & Wikipedia  \\
 \hline
\emph{acquired}            &   80\%    &   80\%  \\
\emph{actorStarredInMovie} &   70\%    &   90\% \\
\emph{athletePlaysForTeam} &   100\%    &   100\% \\
\emph{bookWriter}  &   30\%    &   60\% \\
\emph{cityLocatedInCountry}    &   100\%    &   80\% \\
\emph{competesWith}    &   90\%    &   100\% \\
\emph{hasOfficeInCountry}  &   80\%   &   90\% \\
\emph{headquarteredIn} &   70\%    &   100\% \\
\emph{teamPlaysInLeague}   &   60\%   &   100\% \\
\emph{teamWonTrophy}   &   60\%    &   100\% \\
\hline
\end{tabular}
\end{center}
\caption{Precision of the ten most confident predictions by \sys\
for ten Nell relations. \sys\ reaches good performance across
relations.} \label{t:top10pair}
\end{table}

\subsection{Ontology Mapping}
Finally, we analyze the performance of our ontology mapping
component in more detail. In particular, we are interested in
knowing if our approach, which {\em jointly} maps entities,
types, and relations, outperforms a baseline which relies on
Freebase's internal search API and makes each mapping decision
separately.

We note that solving the mapping problem requires finding a
joint assignment to a considerable number of variables: for Nell,
we computed truth values for 3055 entity mapping candidates,
252 type mapping candidates, and 729 relation mapping candidates;
for IC, the corresponding numbers are 1552, 130, and 256.

We then manually labeled 707 mappings of Nell entities to Freebase entities (${\textbf M}_{\text{onto}}$).
\sys\ reaches a precision of 92.79\%, compared to 88.5\% for our baseline,
which corresponds to a 30\% reduction in mapping errors. Reducing mapping
errors is important, since it leads to higher-quality training data for our
weakly supervised extractors.





\begin{table*}[bt]
\begin{center}
\begin{small}
\begin{tabular}{rl}


\textbf{Target Relation} & \textbf{Mapped Relation} \\
\hline
\emph{acquired}                    & $\pi_{\text{1.name,2.name}}(\text{businessOperation}^1$ $\Join$ organizationChild $\Join$ organizationRelationshipChild $\Join$ $\text{businessOperation}^2)$ \\
                            & $\cup$ $\pi_{\text{3.name,4.name}}(\text{businessOperation}^3$ $\Join$ organizationCompaniesAcquired $\Join$ businessAcquisitionCompanyAcquired $\Join$ $\text{businessOperation}^4)$  \vspace{0.06in}\\
\emph{athleteCoach}
&        $\pi_{\text{1.name,2.name}}(\text{baseballPlayer}^1$ $\Join$ baseballCurrentTeam $\Join$ baseballRosterPosTeam $\Join$ baseballManager $\Join$ $\text{coach}^2)$\\
& $\cup$ $\pi_{\text{3.name,4.name}}(\text{footballPlayer}^3$ $\Join$ footballTeam $\Join$ footballRosterPosTeam $\Join$ footballTeamHeadCoach $\Join$ $\text{footballCoach}^4)$\\
& $\cup$ $\pi_{\text{5.name,6.name}}(\text{basketballPlayer}^5$ $\Join$ drafted $\Join$ draftedTeam $\Join$ basketballTeamCoach $\Join$ $\text{coach}^6)$  \vspace{0.06in}\\
\emph{bookWriter} & $\pi_{\text{1.name,2.name}}(\text{book}^1$ $\Join$ bookWrittenWorkAuthor $\Join$ $\text{author}^2)$ \\
  & $\cup$ $\pi_{\text{3.name,4.name}}(\text{book}^3$ $\Join$ winAward $\Join$ awardWinner $\Join$ $\text{author}^4)$  \vspace{0.06in}\\
  %& $\cup$$\text{book}$ $\Join$ bookCharacters $\Join$ fictionalUniverseCharacterCreatedBy $\Join$ $\text{author})$  \vspace{0.06in}\\
\emph{headquarteredIn} & $\pi_{\text{1.name,2.name}}(\text{businessOperation}^1$ $\Join$ organizationHeadquarters $\Join$ locationMailingAddressCitytown $\Join$ $\text{citytown}^2)$\vspace{0.06in}\\
\emph{stadiumLocInCity} & $\pi_{\text{1.name,2.name}}(\text{sportsFacility}^1$ $\Join$ locationContainedby $\Join$ $\text{cityTown}^2)$ \\
    & $\cup$ $\pi_{\text{3.name,4.name}}(\text{olympicVenue}^3$ $\Join$ locationContainedby $\Join$ $\text{cityTown}^4)$ \\\hline
\end{tabular}
\end{small}
\end{center}
\caption{Ontology mappings produced by \sys\ for five Nell relations, using the project, join, and union operators.}
\label{t:matching}
\end{table*}




Table \ref{t:matching} shows the results of mapping five Nell
relations to Freebase. Here $\Join$ denotes a join and $\cup$ a
union; the superscripted type constraints on the two arguments imply
a selection on the returned entity pairs, and the final projection
retains only the argument names. Due to space limitations, we list
only the highest-scoring Freebase view for each relation.
\sys\ is able to accurately recover relations composed of multiple
select, project, join, and union operations.
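To make the view semantics concrete, the following minimal sketch evaluates a mapping of the same shape as the \emph{stadiumLocInCity} row of Table \ref{t:matching}: two typed join-and-project views over a single containment relation, combined by union. The entity and type names are purely illustrative toy data, not real Freebase identifiers.

```python
# Evaluate a view of the form
#   pi_{1.name,2.name}( type1 JOIN relation JOIN type2 )
# where the type constraints act as selections on the two arguments.

def eval_view(rel_pairs, arg1_type, arg2_type, types):
    """Select pairs whose arguments satisfy the type constraints,
    then project onto the (arg1, arg2) name pair."""
    return {(a, b) for (a, b) in rel_pairs
            if arg1_type in types.get(a, set())
            and arg2_type in types.get(b, set())}

# Toy KB: one binary containment relation plus entity -> type assignments.
located_in = {("Fenway Park", "Boston"),
              ("Panathenaic Stadium", "Athens"),
              ("MIT", "Cambridge")}
types = {"Fenway Park": {"sportsFacility"},
         "Panathenaic Stadium": {"olympicVenue"},
         "MIT": {"university"},
         "Boston": {"cityTown"},
         "Athens": {"cityTown"},
         "Cambridge": {"cityTown"}}

# stadiumLocInCity as a union of two typed views over the same relation.
stadium_loc_in_city = (eval_view(located_in, "sportsFacility", "cityTown", types)
                       | eval_view(located_in, "olympicVenue", "cityTown", types))
print(sorted(stadium_loc_in_city))
# -> [('Fenway Park', 'Boston'), ('Panathenaic Stadium', 'Athens')]
```

Note that ("MIT", "Cambridge") is filtered out: its first argument satisfies neither view's type constraint, which is exactly the selection implied by the typed arguments in the table.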

For the IC target ontology, \sys\ correctly maps 8 of the 9
relations; the mapping for the remaining relation,
\emph{hasSubOrganization}, is partially correct. These results show
that our ontology mapping algorithm returns very few incorrect
mappings, which ensures the robustness of the overall system.




