\section{Related Work}
\label{s:related}

Section~\ref{s:ontomap} first reviews previous approaches to ontology
mapping; we then turn to work on using background knowledge to improve
extractor learning and on exploiting ontologies for relation extraction.

\subsection{Context and Previous Work}
\label{s:ontomap}

Dhamankar \etal~\cite{Dhamankar04imap:discovering} define schema
{\em matching} to be the first step in the process of constructing a
{\em mapping}, \ie\ a function converting descriptions of objects in
one ontology into corresponding descriptions in another. We consider
ontologies composed of {\em types} (unary relations, also known as
concepts, organized in a taxonomy) and binary {\em relations}.
Relations may connect two types (\eg, {\em Parent})
 or may link a type to a primitive value, such as numbers, dates and
strings (\eg, {\em BirthDate}), which are often called {\em
attributes} or {\em properties}. Each type is associated with a set
of instances, called {\em entities}.

A {\em mapping} from a background ontology \B{\cal O} onto a target
\T{\cal O} is a set of partial functions whose ranges are entities,
types and relations in \T{\cal O}. Ullman~\cite{ullman-icdt97} noted
that these mappings can be thought of as view definitions, \eg\
defined using SQL operations such as selection, projection, join and
union. We adopt this perspective as shown in Example~\ref{e:coach}.
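To make the view-definition perspective concrete, the following minimal sketch expresses a complex mapping as selection, join, projection, and union over relations encoded as sets of tuples. All relation names and tuples here are hypothetical illustrations, not taken from the paper's ontologies:

```python
# Sketch of a complex ontology mapping expressed as a view over a
# background ontology, using selection, join, projection, and union.
# All relation names and tuples below are hypothetical illustrations.

# Background ontology: binary relations represented as sets of tuples.
employed_by = {("smith", "lakers"), ("jones", "lakers"), ("brown", "acme")}
has_role = {("smith", "coach"), ("jones", "player"), ("brown", "engineer")}
sports_team = {"lakers", "bulls"}        # a type: its set of entity instances
head_coach = {("davis", "bulls")}        # an alternative source relation

# Target relation CoachOf(person, team), defined as a view:
# select tuples whose role is "coach", join on the person, restrict the
# employer to entities of type sports_team, project onto (person, org),
# then union with the head_coach relation.
derived = {
    (person, org)
    for (person, org) in employed_by     # join employed_by with has_role
    if (person, "coach") in has_role     # selection on the role value
    and org in sports_team               # selection on the employer's type
}
coach_of = derived | head_coach          # union of the two derivations

print(sorted(coach_of))  # [('davis', 'bulls'), ('smith', 'lakers')]
```

The same view could equally be written as a SQL query with SELECT, JOIN, and UNION clauses; the set-comprehension form just keeps the example self-contained.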

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.2in]{figs/relatedwork}
\end{center}
\caption{Classification of selected ontology matching systems, based
on \cite{euzenat2007b}.} \label{fig:examplealgorithms}
\end{figure}

Euzenat \&\ Shvaiko~\cite{euzenat2007b} and Rahm \&\
Bernstein~\cite{Rahm01asurvey} classify approaches to ontology
matching along several dimensions. The input of the matching
algorithm can be {\em schema-based}, {\em instance-based} or {\em
mixed}. The output can be an {\em alignment} (\ie, a one-to-one
function between objects in the two ontologies) or a {\em complex
mapping} (\eg, defined as a view).
Figure~\ref{fig:examplealgorithms} plots some previous methods along
these dimensions.

The majority of existing systems focus on the alignment problem.
Doan \etal~\cite{doan-www02} present GLUE, which casts alignment of
two taxonomies into classification and uses learning techniques. The
more recent system by Wick \& McCallum~\cite{wick-kdd08} applies a
learning approach to a single probabilistic model that considers all
matching decisions jointly. While these systems operate on instances,
others align schemas: Cupid~\cite{Madhavan01genericschema} matches
tree structures in three phases: linguistic matching,
structural matching, and aggregation.
COMA++~\cite{Aumueller05schemaand} enables parallel composition of
matching algorithms. Niepert \etal~\cite{niepert-aaai10} propose a
joint probabilistic model based on Markov logic.
QOM~\cite{Ehrig04qom} matches both instances and schemas, and can
trade off efficiency against quality.

Far less work has looked at finding complex mappings between
ontologies. Artemis~\cite{castano:artemis:} creates global views
using hierarchical clustering of database schema elements.
MapOnto~\cite{An06discoveringthe} produces mapping rules between two
schemas expressed as Horn clauses. Miller \etal's tool
Clio~\cite{miller-vldb00,Miller01theclio} generates complex
SQL queries as mappings, and ranks them heuristically.

For ontological smoothing to work, it is essential that one can find
complex mappings involving selections, projections, joins, and
unions. While MapOnto and Clio handle complex mappings, they are
semi-automatic tools that depend on user guidance. In contrast, we
designed \sys\ to be fully autonomous. Unlike these two tools, \sys\
uses a probabilistic representation and performs joint inference to
find the best mapping.

\subsection{Extraction with Background Knowledge}

It has long been recognized that background knowledge can compensate for
scarce training data in machine learning.\comment{A popular way to inject
  background knowledge into an extractor is by providing small sets of
  seeds for each target label. Thelen \& Riloff~\cite{thelen-emnlp02}, for
  example, use seeds to bootstrap a system for extracting semantic classes
  from text. Similarly, Haghighi \& Klein~\cite{haghighi-hltnaacl06}
  propose the use of a small number of prototypical examples for each
  target label in a part of speech tagging task.

  Others have suggested labeling not only seed examples, but more generally
  labeling features. Collins \& Singer~\cite{collins-emnlp99} propose an
  unsupervised technique for named entity classification, which needs only
  seven labeled features. Smith \& Eisner \cite{smith-acl05} describe a
  learning technique for sequence labeling with labeling features. Druck et
  al.~\cite{druck-sigir08} apply feature labeling to text classification.

  Several works generalize from feature labels to broader sets of
  constraints.}
One way to inject such knowledge is through constraints.  Chang
\etal~\cite{chang-acl07} propose a technique for injecting prior knowledge
into a semi-supervised learning algorithm as soft constraints.  For two
extraction tasks, their constraints include feature labels, \ie, the
relevance of particular words to particular labels, as well as the number
of times a label may appear.

More recent techniques incorporate background knowledge as expectations on
the posterior distributions of an extractor model.  Bellare \& McCallum
\cite{bellare-emnlp09} obtain a 35\% error reduction on a citation
extraction task by adding expectations over how citation texts may align to
a citation database and how a few features are highly indicative of a
particular label.  Chen \etal~\cite{chen-acl11} propose a technique for
relation discovery which uses expectations over the proportion of relation
mentions matching certain syntactic patterns, the number of times a
relation is instantiated, and the number of relation instances a single
word can indicate.

While these approaches typically assume a small amount of background
knowledge supplied by a user, other approaches have tried to leverage
existing resources as background knowledge.  Stevenson \& Greenwood
\cite{stevenson-acl05} use WordNet to retrieve semantic relationships
between lexical items in order to learn more general information extraction
patterns.  Cohen \& Sarawagi~\cite{cohen-kdd04} describe a technique for
incorporating external dictionaries in discriminative sequence
taggers. Other works by Wang \etal~\cite{wang-cikm09} and Hoffmann
\etal~\cite{hoffmann-acl10} propose techniques to leverage the vast number
of structured lists on Web pages, in order to learn extractors with
enhanced generalization ability. Both approaches apply a semi-supervised
algorithm to learn extractor-specific lexicons.
%* agreement of NER and RE: Roth/Yih04, Rush/Sontag/Collins/Jaakkola10 (Dual Decomp.)

\sys\ differs from these approaches in that it automatically generates
the mapping to its background ontology before applying semi-supervised
techniques.

\subsection{Using Ontologies for Extraction}

A great deal of research has looked at automatically populating ontologies
through extraction.  A popular approach is distant supervision, where
existing objects in an ontology are heuristically aligned to a large text
corpus, in order to create training data for an extractor. For example, Wu
\& Weld~\cite{wu-cikm07} and Hoffmann \etal~\cite{hoffmann-acl10} use
Wikipedia's infobox ontology for distant supervision; Mintz
\etal~\cite{mintz-acl09}, Riedel \etal~\cite{riedel-ecml10}, and Hoffmann
\etal~\cite{hoffmann-acl11} use Freebase.

Some work has proposed to leverage the hierarchical structure of an
ontology for smoothing parameter estimates of a learned model. McCallum et
al.~\cite{mccallum-icml98} call this method {\em shrinkage} and demonstrate
a 29\% error reduction in a text classification task. Wu et
al.~\cite{wu-kdd08} apply shrinkage to relation extraction for Wikipedia's
infobox ontology, again showing large improvements.  In this case the
hierarchical structure was not directly available, and first had to be
induced by ontology refinement~\cite{wu-www08}.
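The shrinkage idea above can be sketched in a few lines: a leaf class's sparse parameter estimate is smoothed toward its ancestors' estimates by a weighted mixture along the path to the root. The word probabilities and mixture weights below are made-up illustrations; in practice the weights are typically fit (e.g., by EM on held-out data):

```python
# Sketch of hierarchical shrinkage: smooth a leaf class's sparse
# parameter estimate toward its ancestors' estimates via a weighted
# mixture along the leaf-to-root path. All values are hypothetical.

leaf_est = {"goal": 0.02, "coach": 0.10}                      # sparse leaf
parent_est = {"goal": 0.05, "coach": 0.04, "team": 0.03}      # broader class
root_est = {"goal": 0.01, "coach": 0.01, "team": 0.01, "the": 0.20}

weights = [0.6, 0.3, 0.1]   # mixture weights for leaf, parent, root

def shrink(word):
    # Weighted mixture of the estimates along the path from leaf to root;
    # words unseen at the leaf still get mass from the ancestors.
    return (weights[0] * leaf_est.get(word, 0.0)
            + weights[1] * parent_est.get(word, 0.0)
            + weights[2] * root_est.get(word, 0.0))

print(round(shrink("team"), 3))  # 0.01 -- nonzero despite no leaf evidence
```

The smoothed estimate for "team" is nonzero even though the leaf never observed it, which is exactly how shrinkage compensates for sparse training data at fine-grained classes.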
%\cite{wu-www08,hoffmann-acl10}

Another direction applies reasoning over existing and new knowledge in
order to disambiguate words and learn extraction
patterns. Sofie~\cite{suchanek-www09} and Prospera~\cite{nakashole-wsdm11}
jointly perform pattern matching, word sense disambiguation and ontological
reasoning in a unified model using weighted MaxSAT for inference. Similarly,
Nell~\cite{carlson-wsdm10} couples the semi-supervised training of many
extractors for different categories and relations through a variety of
constraints.
%SOFIE difference to OnSmoo: they used Yago (2 million entities, 20 million facts as seeds), we use only tiny subset of user-provided seeds

Wimalasuriya \& Dou \cite{wimalasuriya-cikm09} propose using multiple
ontologies for extraction. Their system takes a mapping between the
ontologies as input, and combines the output of extractors that have
been learned separately on the individual ontologies.

\sys\ differs from these approaches: the relations \sys\ can extract
are not limited to those in its background ontology. Instead, it
automatically creates new relations by composing select, project, join,
and union operations.


%\cite{michelson-wsdm11}
%extract tables from web data: focus on data integration, joins, selections, use of taxonomy rather than flat seeds

%\nocite{hoffmann-acl11}
%extractors learned from Freebase ontology, ~openie?
