%
% File acl2010.tex
%
% Contact  jshin@csie.ncnu.edu.tw or pkoehn@inf.ed.ac.uk
%%
%% Based on the style files for ACL-IJCNLP-2009, which were, in turn,
%% based on the style files for EACL-2009 and IJCNLP-2008...

%% Based on the style files for EACL 2006 by 
%%e.agirre@ehu.es or Sergi.Balari@uab.es
%% and that of ACL 08 by Joakim Nivre and Noah Smith

\documentclass[11pt]{article}
\usepackage{acl2010}
\usepackage{times}
\usepackage{url}
\usepackage{latexsym}
%\setlength\titlebox{6.5cm}    % You can expand the title box if you
% really have to
\usepackage{xytree}
\usepackage{color}
\usepackage{multirow}
\usepackage{amsmath}
\usepackage{graphicx}


\title{A Survey of Paraphrase and Inference Rules Acquisition}

\author{Nam Khanh Tran, Manfred Pinkal\\
  Department of Computational Linguistics\\
  Saarland University\\
  {\tt \{namkhanh, pinkal\}@coli.uni-saarland.de}}

\date{09.09.2011}

\begin{document}
\input{./titlepage/titlepage.tex}

\begin{abstract}
	The task of paraphrasing is an important part of natural language processing
	and is being increasingly employed to improve the performance of several NLP
	applications. In this paper, we review two approaches which learn paraphrase
	patterns automatically from text corpora and two approaches which improve  the
	quality of such patterns by filtering out incorrect rules and determining their
	directionality. We also address the evaluation problem of paraphrasing and
	present a simple method to assess the correctness of instances in order to
	identify the validity of the rules. 
\end{abstract}

\newpage
\pagestyle{plain}

\section{Introduction}
\label{sec:intro}
				
	In natural language, the concept of paraphrasing is most generally defined on
	the basis of the principle of semantic equivalence. That is, a paraphrase is an
	alternative surface form in the same language expressing the same semantic
content as the original form. For example, the phrases \textit{wrote} and
	\textit{is the author of} are considered paraphrastic in sentences
	(a) and (b).
	\begin{itemize}
		\item[(a)] Francis Scott Key wrote the "Star Spangled Banner"
		\item[(b)] Francis Scott Key is the author of "Star Spangled Banner"
	\end{itemize}	
Paraphrases may occur at several levels. Whereas individual lexical items
	having the same meaning are usually referred to as lexical paraphrases or,
	more commonly, synonyms, the term phrasal paraphrase refers to phrasal fragments
	sharing the same semantic content. Two sentences that represent the same
	semantic content are termed sentential paraphrases. Textual entailment is a
	similar phenomenon, in which the presence of one expression licenses the
	validity of another.
		
	% applications
	\vspace*{0.3cm}
	Paraphrases and inference rules are known to improve performance in various
	natural language processing applications. One of the most common applications
	of paraphrasing is the automatic generation of query variants for submission to
	information retrieval systems or of patterns for submission to information
extraction systems \cite{Metzler:07,Harabagiu:06}. In text summarization,
	paraphrase patterns help to evaluate systems automatically and to detect
	repetition, thus improving the systems' performance \cite{Barzilay:99}. In
	addition, paraphrasing has also been applied to directly improve the translation and
	evaluation processes in machine translation \cite{Callison-Burch:06}.
	
	\vspace*{0.3cm}
Traditionally, paraphrases and inference rules have been created manually.
	However, this approach is problematic since it is expensive and difficult
	for humans to list all possible paraphrases of a particular phrase. Therefore,
	such manual collections of patterns are generally incomplete. In recent work,
	several approaches have been proposed to learn paraphrases and inference rules
	automatically from corpora by finding matching sentences
	\cite{Lin:01,Pang:03,Szpektor:04}. Learning paraphrases and inference rules
	from such redundant corpora is highly accurate but not feasible on a large scale,
	as corpora of this kind are very limited (except for \cite{Szpektor:04}, which uses
	the Web as a large corpus).
	
	\vspace*{0.3cm}
This paper describes the DIRT approach, which learns paraphrase patterns from
	regular corpora \cite{Lin:01}, and the TEASE system, which extends the DIRT
	approach by using the largest available corpus, the Web, for this task
	\cite{Szpektor:04}. The paraphrase patterns that the two approaches try to learn
	are defined as templates with linked variables called anchors, for example $X$
	{\em write} $Y$ $\Leftrightarrow$ $X$ {\em is the author of} $Y$. These anchors
	are lexical items describing the context of the template in a sentence. The two
	methods create a large resource of paraphrases or inference rules that can be
	used by applications. However, along with valid paraphrases, these methods
	also produce a large number of incorrect patterns. Consequently, this hinders
	applications from using them directly. This problem is addressed by
	\cite{Pantel:07} and \cite{Bhagat:07}, who aim to filter out incorrect
	inference rules from the resource by making use of the selectional preferences
	of the relation or predicate. The selectional preferences of a relation are
	defined as the set of semantic classes that its arguments may belong to.
	Whereas \cite{Pantel:07} assume that some inference rules are correct only for
	certain arguments and present an inferential selectional
	preferences model for determining the validity of the inference rules,
	\cite{Bhagat:07} try to determine the plausible inference rules and then
	identify the directionality of these rules.
	
	\vspace*{0.3cm}
	In this paper, we also describe the evaluation problem of paraphrase
	acquisition. Whereas other language processing tasks have multiple annual
	community-wide evaluations using standard test sets and manual as well as
automated metrics, the task of automated paraphrasing does not. However, most
	recent work on paraphrase acquisition does include its own direct evaluation.
	A direct approach is to substitute the paraphrase for the original
	phrase in its sentence and then present both sentences to human judges. The
	basic idea of such substitution-based evaluation is that items deemed to be
	paraphrases may behave as such only in some contexts and not in others. Following that
	line, \cite{Szpektor:07} present the instance-based evaluation method, wherein
	human judges are presented not only with the inference rule but also with a
	sample of sentences that match its left-hand side, and are then asked to assess
	whether the rule holds for each specific example.
	
	\vspace*{0.3cm}
Technically, in \cite{Szpektor:07}, human judges are required to determine
	the correctness of examples given the right-hand side of an inference rule, and
	then to assess whether the rule holds. In this paper, we present a simple method which
	utilizes Web snippets to assess the correctness of instances automatically.
	We assume that if an instance occurs frequently in Web snippets, it is
	likely to be correct. We then present experiments with inference rules
	from the DIRT rule database and instances from the RTE-2 dataset.
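The snippet-based correctness check described above can be sketched as follows; the helper names, the snippet list, and the acceptance threshold are illustrative assumptions, not part of the method as specified in this paper:

```python
def instance_frequency(instance, snippets):
    """Count the snippets that contain the instance string (case-insensitive)."""
    needle = instance.lower()
    return sum(needle in s.lower() for s in snippets)

def is_likely_correct(instance, snippets, threshold=3):
    # Assumption: an instance found in at least `threshold` snippets is
    # taken to be correct; the actual criterion may differ.
    return instance_frequency(instance, snippets) >= threshold
```

In practice, the snippets would be the result excerpts returned by a Web search engine query built from the instance.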
	
	\vspace*{0.3cm}
The rest of this paper is organized as follows. The next section describes two
	methods for acquiring paraphrases, DIRT and TEASE. Section 3 presents the
	approaches which filter out incorrect inference rules and identify their
	directionality. The evaluation methods are then described in Section 4.
	Section 5 presents our proposed method to assess the correctness of
	examples, and finally we conclude in Section 6.
	
\section{Paraphrase and Inference Rules Acquisition}

	\cite{Lin:01} discuss how to measure distributional similarity over dependency
	tree paths (DIRT) in order to induce generalized paraphrase templates such as:
	\begin{itemize}
		\item[] X found solution to Y $\Leftrightarrow$ X solved Y
		\item[] X caused Y $\Leftrightarrow$ Y is blamed on X
	\end{itemize}
Technically, these templates represent inference rules, such that the
	right-hand side can be inferred from the left-hand side but is not necessarily
	semantically equivalent to it; for example, {\em "X caused Y" $\approx$ "Y is blamed on X"}
	is an inference rule even though the two sides do not mean exactly the same thing. This
	work is primarily concerned with inducing such rules rather than strict paraphrases.
	Whereas single links between nodes in a dependency tree represent direct semantic
	relationships, a sequence of links, or a path, represents an indirect semantic relationship
	between two content words. Here, a path is named by concatenating the dependency
	relationships and lexical items along the path, excluding the lexical items at
	the two ends. In this way, a path can be thought of as a pattern with variables
	at either end.
	\begin{figure}[h]
		\centering
		\begin{tabular}{c}
			\hfil \xy
			\xytree[1]{
				& \xynode{\textbf{find}}
					\xyconnect [->]{1,-1}"|{\textit{\small subj}}"
					\xyconnect [->]{1,1}"|{\textit{\small obj}}"
				\\
				\xynode{\textbf{John}} & & \xynode{\textbf{solution}}
				\xyconnect [->]{1,-1}"|{\textit{\small det}}"
				\xyconnect [->]{1,1}"|{\textit{\small to}}"\\
				& \xynode{a} & & \xynode{\textbf{problem}}
				\xyconnect [->]{1,0}"|{\textit{\small det}}"\\
				& & & \xynode{the} &
			}		
			\endxy	\\ \\

			N:subj:V$<$\textbf{find}$>$V:obj:N$<$\textbf{solution}$>$:N:to:N \\
			"\textbf{X} found solution to \textbf{Y}" \\
			(a)
		\end{tabular}
		\begin{tabular}{c}
			\hfil \xy
			\xytree[1]{
				& \xynode{\textbf{solved}}
					\xyconnect [->]{1,-1}"|{\textit{\small subj}}"
					\xyconnect [->]{1,1}"|{\textit{\small obj}}"
				\\
				\xynode{\textbf{John}} & & \xynode{\textbf{problem}}
				\xyconnect [->]{1,0}"|{\textit{\small det}}" \\
				& & \xynode{the} &  
			}		
			\endxy	 \\ \\ \\ \\ \\
			N:subj:V$<$\textbf{solve}$>$V:obj:N \\
			"\textbf{X} solved \textbf{Y}" \\
			(b)
		\end{tabular}
\caption{Two different dependency paths are considered paraphrastic because
		the corresponding slots are filled by the same words (John and problem) in
		both paths}
		\label{fig:dep paths}
	\end{figure}
	
For example, in the first dependency tree in Figure \ref{fig:dep paths}, the dependency
	path between the nodes \textit{John} and \textit{problem} can be extracted as 
	follows. We start at the node \textit{John} and see that it is connected to a verb
	through the dependency relation \textit{subject}, so we append that information to
	the path. The next lexical item in the tree is the verb (\textit{found}), and we append
	its lemma (\textit{find}) to the path. Then we append the semantic relation \textit{
	object} connecting the verb to a noun. This process continues until we reach the other
	slot, which is the word \textit{problem} in this example. The relations at
	either end of a path are referred to as \textbf{SlotX} and \textbf{SlotY}, the tuples
	(\textit{SlotX, John}) and (\textit{SlotY, problem}) are features of the path, and the
	dependency relations inside the path that are not slots are called \textbf{internal
	relations}. Intuitively, one can imagine a path as a representation of the
	pattern \textit{"X find solution to Y"}, where X and Y are variables.
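The naming step described above can be sketched in Python; the function name and the data layout are our own illustrative assumptions, and the output is a simplified rendering of the path notation used in this section:

```python
def path_name(links, internal_lemmas):
    """Build a DIRT-style path name by interleaving dependency links
    (e.g. "N:subj:V") with the lemmas of the internal content words,
    excluding the lexical items at the two slot ends."""
    parts = [links[0]]
    for lemma, link in zip(internal_lemmas, links[1:]):
        parts.append("<%s>%s" % (lemma, link))
    return "".join(parts)
```

For instance, `path_name(["N:subj:V", "V:obj:N", "N:to:N"], ["find", "solution"])` yields `"N:subj:V<find>V:obj:N<solution>N:to:N"`, the path between \textit{John} and \textit{problem} in Figure \ref{fig:dep paths}.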
	
	\vspace*{0.3cm}
\cite{Lin:01} impose a set of constraints on the paths to be extracted from the text
	in order to reduce the number of distinct paths, arguing that
	most meaningful inference rules involve only paths that satisfy these conditions and that
	they significantly reduce the amount of computation:
	\begin{itemize}
		\item The variables must be instantiated by entities, i.e., the slot fillers must be nouns
		\item Only dependency relations connecting two content words are considered
		\item The frequency count of an internal relation must exceed a threshold
	\end{itemize}
With this representation for a path, \cite{Lin:01} propose an extended version of the
	distributional similarity hypothesis (the \textbf{Extended Distributional Hypothesis}):
	\textit{If two paths tend to occur in similar contexts, the meanings of the paths
	tend to be similar.}\footnote{In other words, if similar sets of words fill the same
	variables for two different patterns, then the patterns may be considered to have
	similar meanings.}
	For example, Table \ref{tab:slot fillers} lists a set of example pairs of words connected
	by the paths N:subj:V$<$\textbf{find}$>$V:obj:N$<$\textbf{solution}$>$:N:to:N and 
	N:subj:V$<$\textbf{solve}$>$V:obj:N. As shown in Table \ref{tab:slot fillers}, there 
	is considerable overlap between the corresponding slot fillers of the two paths. Therefore, by
	the Extended Distributional Hypothesis, the two paths are considered to have similar meanings.
	
	\begin{table}[h]
		\centering
		\caption{Sample slot fillers for two dependency paths}	
		\begin{tabular}{c c c c}
		\\
		\hline
		\multicolumn{2}{c}{{\em X find solution to Y}} & \multicolumn{2}{c}{{\em X solve Y}} \\
		SlotX & SlotY & SlotX & SlotY \\
		\hline
		committee & civil war & government & problem \\
government & problem & researcher & mystery \\
		he & problem & committee & problem \\
		legislator & budget deficit & petition & woe \\
		commission & strike & clout & crisis \\
		government & crisis & sheriff & murder \\
		\hline
		\end{tabular}
		\label{tab:slot fillers}
	\end{table}	
	
	\vspace*{0.3cm}
\cite{Lin:01} use newspaper text as their input corpus and, in a pre-processing step,
	use the Minipar parser to create dependency trees for all the sentences in the corpus.
	Algorithm 2 provides the details of the rest of the process: Steps 1 and 2 extract paths
	and compute their distributional properties, and Steps 3 and 4 extract pairs of paths
	which are similar. The sets of paths that are considered to have similar meanings are
	returned by the algorithm.
	
	\begin{table}
	\caption{\textbf{Algorithm 2 \cite{Lin:01}}: Produce inference rules from a parse corpus}
	\line(1,0){430}
	\begin{itemize}
		\item[\small 1.] Extract paths of the form described above from the parsed corpus
		\item[\small 2.] For each tuple of the form ($p,s,w$) where $p$ is a path,
		$s$ is one of the two slots in $p$ and $w$ is a word that instantiates in that 
		slot, calculate the following two quantities:
			\begin{itemize}
\item[(a)] A count $C(p,s,w)$ indicating how many times the word $w$ appeared in
				slot $s$ of path $p$
				\item[(b)] The mutual information $I(p,s,w)$ indicating the strength of
				association between slot $s$ and word $w$ in path $p$:
\[
					I(p,s,w) = \log \left( \dfrac{C(p,s,w) \displaystyle \sum_{p',w'} C(p',s,w')}
					{\displaystyle \sum_{w'} C(p,s,w') \displaystyle \sum_{p'} C(p',s,w)} \right)
				\]
			\end{itemize}
		\item[\small 3.] \textbf{for} each extracted path $p$ \textbf{do}
			\begin{itemize}
\item[] Find all instances ($p,w_1,w_2$) such that $p$ connects the words $w_1$
				and $w_2$
				\item[] \textbf{for} each such instance \textbf{do}
					\begin{itemize}
						\item[] Update $C(p,SlotX,w_1)$ and $I(p,SlotX,w_1)$
						\item[] Update $C(p,SlotY,w_2)$ and $I(p,SlotY,w_2)$
					\end{itemize}
				\item[] \textbf{end for}			
			\end{itemize}
		\item[] \textbf{end for}
		\item[\small 4.] \textbf{for} each extracted path $p$ \textbf{do}
			\begin{itemize}
				\item[\small 4.1] Retrieve all the candidate paths $C$ which share at least one
				 feature with $p$
				\item[\small 4.2] Prune candidates from $C$ based on feature overlap with $p$
				\item[\small 4.3] Compute the similarity between $p$ and the candidates in $C$
				\[
					sim(slot_1,slot_2) = \dfrac{\displaystyle \sum_{w \in T(p_1,s) \cap T(p_2,s)}
					I(p_1,s,w)+I(p_2,s,w)}
					{\displaystyle \sum_{w \in T(p_1,s)} I(p_1,s,w) + \displaystyle \sum_{w \in
					T(p_2,s)} I(p_2,s,w)}
				\]
				\[
					S(p_1,p_2) = \sqrt{sim(SlotX_1,SlotX_2) \times sim(SlotY_1,SlotY_2)}
				\]
				\item[\small 4.4] Output all paths in $C$ sorted by their similarity to $p$
			\end{itemize}
	\end{itemize}
	\line(1,0){430}
	\end{table}			
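Steps 2--4 of Algorithm 2 can be sketched as a simplified, in-memory Python implementation; the class and method names are our own, and the candidate retrieval (Step 4.1) and pruning (Step 4.2) needed for efficiency at scale are omitted:

```python
import math
from collections import defaultdict

class DirtCounts:
    """Simplified sketch of Steps 2-4 of Algorithm 2 (Lin and Pantel)."""

    def __init__(self):
        self.c = defaultdict(int)     # C(p, s, w)
        self.c_ps = defaultdict(int)  # sum over w    of C(p, s, w)
        self.c_sw = defaultdict(int)  # sum over p    of C(p, s, w)
        self.c_s = defaultdict(int)   # sum over p, w of C(p, s, w)

    def add_instance(self, p, w1, w2):
        """Step 3: update counts for an instance (p, w1, w2)."""
        for s, w in (("SlotX", w1), ("SlotY", w2)):
            self.c[(p, s, w)] += 1
            self.c_ps[(p, s)] += 1
            self.c_sw[(s, w)] += 1
            self.c_s[s] += 1

    def mi(self, p, s, w):
        """Mutual information I(p, s, w) of Step 2(b)."""
        num = self.c.get((p, s, w), 0) * self.c_s[s]
        den = self.c_ps[(p, s)] * self.c_sw[(s, w)]
        return math.log(num / den) if num and den else 0.0

    def fillers(self, p, s):
        """T(p, s): the words observed in slot s of path p."""
        return {w for (q, sl, w) in self.c if (q, sl) == (p, s)}

    def slot_sim(self, p1, p2, s):
        """Similarity of the same slot of two paths (first formula of 4.3)."""
        t1, t2 = self.fillers(p1, s), self.fillers(p2, s)
        shared = sum(self.mi(p1, s, w) + self.mi(p2, s, w) for w in t1 & t2)
        total = (sum(self.mi(p1, s, w) for w in t1) +
                 sum(self.mi(p2, s, w) for w in t2))
        return shared / total if total else 0.0

    def path_sim(self, p1, p2):
        """S(p1, p2): geometric mean over SlotX and SlotY (Step 4.3)."""
        return math.sqrt(self.slot_sim(p1, p2, "SlotX") *
                         self.slot_sim(p1, p2, "SlotY"))
```

Two paths whose slots are filled by the same words, as in Table \ref{tab:slot fillers}, receive a similarity close to 1, while paths with disjoint fillers receive 0.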
	
	\vspace*{0.3cm}	
\cite{Lin:01} show that the algorithm obtains significant results on 1GB of newspaper text
	\footnote{AP Newswire, San Jose	Mercury and Wall Street Journal}. However, the method suffers
	from some limitations. One issue is that the performance of the algorithm depends heavily
	on the root of the extracted path. It tends to perform better for paths with verbal roots because
	verbs frequently have several modifiers, whereas nouns tend to have no more than one, and if a
	word has fewer than two modifiers, no path can go through it as the root. In addition, the
	algorithm identifies only templates with pre-specified structures, its accuracy seems limited,
	and its coverage is restricted to the scope of the available corpus. Another issue is that the 
	algorithm, despite the use of informative distributional features, can generate incorrect
	or implausible paraphrase patterns such as \textit{"X eat Y" $\Leftrightarrow$ "X like Y"}.
	
	
	\vspace*{0.3cm}
Unlike the DIRT approach, \cite{Szpektor:04} propose the TEASE system, which uses pairs of slot
	fillers that occur in the same sentence, called \textit{anchors}. For example, the anchor set
	\{Aspirin$\xleftarrow{subj}$, heart attack$\xleftarrow{obj}$\} can be extracted for the verb
	\textit{prevent} from the sentence "\textit{Aspirin prevents a first heart attack}". In addition,
	whereas \cite{Lin:01} extract all possible dependency paths from an available corpus, 
	\cite{Szpektor:04} apply the TEASE method to the largest available corpus, the Web. Figure 
	\ref{fig:tease} gives an overview of the TEASE system.
	
	\vspace*{0.3cm}	
For each lexicon entry, denoted a \textit{pivot}, such as \textit{"acquire", "fall to",
	"prevent", "arrest"}, the algorithm first creates a complete template, called the 
	\textit{pivot template}. These pivot templates always contain a subject and an object as
	variable slots.	For example, "X $\xleftarrow{subj}$ arrest $\xrightarrow{obj}$ Y" represents the
	pivot template for the lexicon entry \textit{'arrest'}. Then, a \textit{sample corpus} is
	constructed for the pivot template by utilizing a Web search engine to retrieve all sentences
	which instantiate all variable slots. In the examples below, sentence (a) is acceptable
	but sentence (b) is discarded because it lacks the subject of \textit{arrest}:
	\begin{itemize}
		\item[(a).] Police arrest 19-year old LulzSec hacker on Monday
		\item[(b).] The murderer was arrested in the morning
	\end{itemize}
Each phrase that instantiates the variable slots is statistically tested for a strong
	collocation relationship with the pivot template. Phrases that occur too frequently on the Web 
	are discarded, and the N phrases with the highest \textit{tf.idf} score \footnote{Here,
	\textit{tf.idf} = $freq_S(X) \cdot \log\left(\dfrac{N}{freq_W(X)}\right)$, where $freq_S(X)$
	is the number of occurrences of X in the sample corpus, N is the total number of Web
	documents, and $freq_W(X)$ is the number of Web documents containing X} are selected.
	The pivot template and each of the associated phrases are then posed to the Web search
	engine to retrieve more sentences. Iteratively, the sample corpus can be extended and more
	candidate anchor sets can be extracted from it.
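The anchor-selection step can be sketched as follows; the function names, parameters, and data layout are illustrative assumptions, and in practice the frequency statistics would come from sample-corpus counts and search-engine hit counts:

```python
import math

def tfidf(freq_sample, n_web_docs, freq_web):
    """tf.idf of a candidate anchor, following the footnote above:
    freq_S(X) * log(N / freq_W(X))."""
    return freq_sample * math.log(n_web_docs / freq_web)

def select_anchors(candidates, n_web_docs, max_web_freq, top_n):
    """candidates maps each phrase to (freq in sample corpus, freq on Web).
    Overly frequent phrases are discarded, then the top-N by tf.idf are kept.
    `max_web_freq` is an assumed cut-off for 'too frequent on the Web'."""
    scored = [(tfidf(fs, n_web_docs, fw), phrase)
              for phrase, (fs, fw) in candidates.items()
              if fw <= max_web_freq]
    return [phrase for _, phrase in sorted(scored, reverse=True)[:top_n]]
```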
	
	\begin{figure}[h]
		\centering
		\includegraphics[scale=0.5]{tease.jpg}
		\caption{An overview of TEASE system}
		\label{fig:tease}
	\end{figure}	
	
	\vspace*{0.3cm}
The next step extracts, from the sentences containing the anchor sets, templates
	for which an entailment relation holds with the pivot. The retrieved sentences
	are first parsed with the Minipar parser, and the nodes representing anchor slots are
	replaced with variables (X, Y). The representations of the sentences are then
	used to learn the maximal most general templates for the pivot. The basic idea
	of the learning algorithm is to construct a compact tree representation which
	contains all dependency relations encountered in the sample corpus. Figure
	\ref{fig:most general template} gives an illustration of the algorithm.
	\begin{figure}[h]
		\centering
		\begin{itemize}
			\item[(1)] Police takes John Lennon into custody
			\item[(2)] Officers take Whitney Houston into custody for drug charges			
		\end{itemize}
		\begin{tabular}{c}
			\hfil \xy
			\xytree[1]{
				& & \xynode{take(1)}
					\xyconnect [->]{1,-2}"|{\textit{\small subj}}"
					\xyconnect [->]{1,0}"|{\textit{\small obj}}"
					\xyconnect [->]{1,2}"|{\textit{\small into}}"
				\\
				\xynode{X(1)} & & \xynode{Y(1)} & & \xynode{custody(1)}
			}		
			\endxy		
		\end{tabular}		
		\begin{tabular}{c}
			\hfil \xy
			\xytree[1]{
				& & \xynode{take(2)}
					\xyconnect [->]{1,-2}"|{\textit{\small subj}}"
					\xyconnect [->]{1,0}"|{\textit{\small obj}}"
					\xyconnect [->]{1,2}"|{\textit{\small into}}"
				\\
				\xynode{X(2)} & & \xynode{Y(2)} & & \xynode{custody(2)}
				\xyconnect [->]{1,0}"|{\textit{\small for}}"
				\\
				& & & & \xynode{drug charges(2)}
			}		
			\endxy
		\end{tabular}
		\\\textbf{$\Rightarrow$}
		\begin{tabular}{c}
			\hfil \xy
			\xytree[1]{
				& & \xynode{\textbf{take(1,2)}}
					\xyconnect [->]{1,-2}"|{\textit{\small subj}}"
					\xyconnect [->]{1,0}"|{\textit{\small obj}}"
					\xyconnect [->]{1,2}"|{\textit{\small into}}"
				\\
				\xynode{\textbf{X(1,2)}} & & \xynode{\textbf{Y(1,2)}} & &
													 \xynode{\textbf{custody(1,2)}}
			}		
			\endxy		
		\end{tabular}	
\caption{Example of the extraction of a maximal most general template}	
		\label{fig:most general template}
	\end{figure}
		
The last phase of the TEASE algorithm ranks the template candidates. The goal of ranking
	is to indicate which of the candidates is more likely to be correct: the higher the
	rank of a template, the more confident the algorithm is that the template
	participates in an entailment relation with the pivot template.
	\cite{Szpektor:04} proposed several ranking methods. The baseline ranking
	method is based on the number of different anchor sets and sentences supporting
	the template. In order to improve the ranking of templates, two re-ranking
	methods are proposed, called random walk stationary probability and template
	similarity.
	
	\vspace*{0.3cm} 
Whereas the DIRT algorithm does not require repeated occurrences of specific combinations of
	multiple context words, but rather examines each anchor slot independently of the other, the
	number of pairs of paraphrase candidates it extracts is much higher than in other algorithms.
	However, relying only on word distribution similarity, and not on specific events or facts as
	identified by complete anchor sets in the TEASE algorithm, may decrease the precision of correct
	pairs compared to sentence-alignment methods. Both the DIRT and TEASE algorithms achieve an
	average yield of 40\%. It is interesting that some templates can be learned by TEASE but missed
	by DIRT, and some can be extracted by DIRT but not by TEASE. Therefore, the two algorithms
	complement each other in terms of the entailment relations they learn. Another point of
	comparison is that whereas DIRT has exhaustively processed a local corpus, thus reaching the 
	limit of learning from that corpus, TEASE uses the Web as its corpus and can adjust its
	parameters to scan more data in order to learn more templates. However, neither algorithm
	addresses the problem of determining the direction of the inference rules, which is
	later partially solved by incorporating knowledge about the selectional preferences of 
	paraphrase patterns \cite{Pantel:07,Bhagat:07}.
	
	
	
\section{Using Relational Selectional Preferences to Improve Inference Resources}
\label{sec:rsp}
	
\cite{Lin:01,Szpektor:04} described automatic methods for building inference resources. However,
	using these resources in applications has been hindered by the large amount of incorrect
	inferences they generate. For example, given the inference rule \textit{"X is charged by Y" 
	$\Rightarrow$ "X announced the arrest of Y"}, from sentence (a) we can infer that 
	\textit{"federal prosecutors"} announced the arrest of \textit{Terry Nichols}, but we cannot
	infer from sentence (b) that \textit{"CCM telemarketers"} announced the arrest of
	\textit{accounts}.
	\begin{itemize}
		\item[(a)] Terry Nichols was charged by federal prosecutors for murder and conspiracy in 
		the Oklahoma City bombing.
		\item[(b)] Fraud was suspected when accounts were charged by CCM telemarketers without
		obtaining consumer authorization.
	\end{itemize}	
	Therefore, \cite{Pantel:07,Bhagat:07} present algorithms which aim to filter out incorrect
	inference rules from the resources. Whereas \cite{Pantel:07} learn the admissible argument
	values for which an inference rule holds, called Inferential Selectional Preferences, 
	\cite{Bhagat:07} discover the directionality of inference rules.
	
	\vspace*{0.3cm}
	The aim of the paper presented by \cite{Pantel:07} is to learn inferential selectional
	preferences for filtering inference rules. Formally, given an inference rule
	$p_i \Rightarrow p_j$ and the instance $\langle x,p_i,y \rangle$, the algorithm
	can determine whether $\langle x,p_j,y \rangle$ is valid or not. In the paper, 
	\cite{Pantel:07} propose two relational models and several filtering algorithms
using these models. Given a large corpus, they first find the occurrences of each semantic
	relation $p$. For each instance $\langle x,p,y \rangle$, they retrieve the sets $C(x)$
	and $C(y)$ of the semantic classes that $x$ and $y$ belong to and calculate the frequencies
	of the triple $\langle c(x),p,c(y) \rangle$. Each triple $\langle c(x),p,c(y) \rangle$
	is considered as a candidate selectional preference for the semantic relation $p$. These
	candidates are then ranked according to the strength of association between the two semantic
	classes, $c_x$ and $c_y$, given the relation $p$.
	
	\vspace*{0.3cm}
	The first model called Joint Relational Model (JRM) considers the arguments of the binary
	semantic relations jointly. The ranking function is defined as follows:
\[
		pmi(c_x \vert p; c_y \vert p) = \log \dfrac{P(c_x,c_y \vert p)}
							{P(c_x \vert p)P(c_y \vert p)}
	\]
	where 
	\[
		P(c_x \vert p) = \dfrac{\vert c_x,p,* \vert}{\vert *,p,* \vert}
		~~~~~
		P(c_y \vert p) = \dfrac{\vert *,p,c_y \vert}{\vert *,p,* \vert}
		~~~~~
		P(c_x,c_y \vert p) = \dfrac{\vert c_x,p,c_y \vert}{\vert *,p,* \vert}		
	\]
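The JRM ranking score can be computed directly from class-level triple counts; the sketch below assumes the corpus instances have already been mapped to semantic classes, and the function name and data layout are our own:

```python
import math

def jrm_pmi(triples, cx, cy, p):
    """pmi(c_x|p; c_y|p) computed from observed class-level triples,
    following the JRM formulas above. `triples` is a list of
    (c_x, p, c_y) tuples."""
    n_p  = sum(1 for (a, r, b) in triples if r == p)              # |*, p, *|
    n_x  = sum(1 for (a, r, b) in triples if r == p and a == cx)  # |c_x, p, *|
    n_y  = sum(1 for (a, r, b) in triples if r == p and b == cy)  # |*, p, c_y|
    n_xy = sum(1 for t in triples if t == (cx, p, cy))            # |c_x, p, c_y|
    p_x, p_y, p_xy = n_x / n_p, n_y / n_p, n_xy / n_p
    return math.log(p_xy / (p_x * p_y))
```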
	
However, because of data sparseness, it is difficult for both classes to co-occur in the
	corpus even
	though they would form a valid relational selectional preference. In order to alleviate
	this problem, \cite{Pantel:07} propose a second model called the Independent Relational
	Model (IRM). Basically, this model is similar to the JRM except that all tuples 
	$\langle c_x,p,* \rangle$ and $\langle *,p,c_y \rangle$ are considered as candidate
	selectional preferences for $p$ instead of $\langle c_x,p,c_y \rangle$.
	
	\vspace*{0.3cm}
For each inference rule $p_i \Rightarrow p_j$, the set of candidate inferential selectional
	preferences (ISPs) is defined as the intersection of the RSPs of $p_i$ with the RSPs of $p_j$.
	The Joint Inferential Model (JIM) and the Independent Inferential Model (IIM) are proposed
	based on the JRM and IRM, respectively. The score of an ISP can be defined as the
	minimum, maximum or average of the scores of its RSPs. Based on these models, \cite{Pantel:07}
	present several filtering algorithms which range from the least to the most permissive:
	\begin{itemize}
		\item \textbf{ISP.JIM}, accepts the inference $\langle x,p_j,y \rangle$ if the ISP
		$\langle c_x,p_j,c_y \rangle$ is admitted by JIM for some $c_x$ and $c_y$
		\item \textbf{ISP.IIM.$\wedge$}, accepts the inference $\langle x,p_j,y \rangle$ if
		the ISPs $\langle c_x,p_j,* \rangle$ AND $\langle *,p_j,c_y \rangle$ are admitted
		by the IIM for some $c_x$ and $c_y$
		\item \textbf{ISP.IIM.$\vee$}, accepts the inference $\langle x,p_j,y \rangle$ if
		the ISPs $\langle c_x,p_j,* \rangle$ OR $\langle *,p_j,c_y \rangle$ are admitted
		by the IIM for some $c_x$ and $c_y$		
	\end{itemize}	 
	Each filtering algorithm can be tuned to be more or less strict by setting an acceptance
	threshold on the ranking scores or by selecting only the top $\tau$ percent highest ranking
	SPs.
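The three filtering algorithms can be sketched as a single decision function; the function name, the mode strings, and the representation of the admitted ISPs as Python sets are illustrative assumptions:

```python
def isp_filter(cx_classes, cy_classes, jim_pairs, iim_x, iim_y, mode):
    """Decide whether to accept the inference <x, p_j, y>.

    cx_classes / cy_classes: the semantic classes C(x) and C(y) of x and y.
    jim_pairs: (c_x, c_y) pairs admitted by JIM for p_j.
    iim_x / iim_y: classes admitted by IIM for the X and Y slots of p_j.
    mode: "JIM", "IIM_AND" or "IIM_OR", standing for ISP.JIM,
    ISP.IIM.AND and ISP.IIM.OR respectively."""
    jim_ok = any((cx, cy) in jim_pairs
                 for cx in cx_classes for cy in cy_classes)
    x_ok = any(cx in iim_x for cx in cx_classes)
    y_ok = any(cy in iim_y for cy in cy_classes)
    if mode == "JIM":
        return jim_ok
    if mode == "IIM_AND":
        return x_ok and y_ok
    return x_ok or y_ok
```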
	
	\vspace*{0.3cm}
Similarly to \cite{Pantel:07}, \cite{Bhagat:07} aim to filter out incorrect inference
	rules from inference resources by identifying the directionality of the rules. Formally, given
	a symmetric inference rule $p_i \Leftrightarrow p_j$, the LEDIR algorithm can infer which one
	of the following is more appropriate:
	\begin{itemize}
		\item[1.] $p_i \Leftrightarrow p_j$
		\item[2.] $p_i \Rightarrow p_j$
		\item[3.] $p_i \Leftarrow p_j$
		\item[4.] \textit{No plausible inference}						
	\end{itemize}
For example, consider the inference rule \textit{"X eat Y" $\Leftrightarrow$ "X like Y"}; it
	is most plausible to conclude \textit{"X eat Y" $\Rightarrow$ "X like Y"}. Basically, the 
	LEDIR algorithm uses selectional preferences along the lines of \cite{Pantel:07} to determine
	the plausibility and directionality of inference rules.
	
	\vspace*{0.3cm}
	The plausibility of an inference is determined based on the overlap coefficient between
	the selectional preferences of the two paths. Given a candidate inference rule $p_i
	\Leftrightarrow p_j$, let $\langle C(x),p_i,C(y) \rangle$ and $\langle C(x),p_j,C(y) \rangle$
	denote the relational selectional preferences for $p_i$ and $p_j$ respectively in which
	$C(x)$ and $C(y)$ are the set of semantic classes of words that can occur in the position
	$x$ and $y$ in the relation $p$ of the form $\langle x,p,y \rangle$. The overlap
	coefficient between the selectional preferences of $p_i$ and $p_j$ is calculated as:
	\[
	sim(p_i,p_j) = \dfrac{\vert \langle C_x,p_i,C_y \rangle \cap \langle C_x,p_j,C_y \rangle \vert}
			{min(\vert \langle C_x,p_i,C_y \rangle \vert, \vert \langle C_x,p_j,C_y \rangle \vert) }
	\]
If the overlap coefficient value \footnote{Here, the value is calculated through two models,
	the Joint Relational Model and the Independent Relational Model, as 
	described in \cite{Pantel:07}} is larger than an empirically determined threshold $\alpha$,
	the inference is plausible; otherwise it is filtered out as implausible.
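The plausibility test reduces to a set computation over the selectional preferences of the two paths; the sketch below assumes a JRM-style representation where each RSP is a (class, class) pair, and the function names are our own:

```python
def overlap_coefficient(rsp_i, rsp_j):
    """Overlap coefficient between the selectional-preference sets of two
    paths, as in the formula above; rsp_i and rsp_j are sets of
    (C_x, C_y) class pairs."""
    return len(rsp_i & rsp_j) / min(len(rsp_i), len(rsp_j))

def is_plausible(rsp_i, rsp_j, alpha=0.15):
    # alpha is the empirically determined threshold; 0.15 is the value
    # reported as best-performing in the LEDIR experiments.
    return overlap_coefficient(rsp_i, rsp_j) > alpha
```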
	
	\vspace*{0.3cm}
For all inference rules determined to be plausible, \cite{Bhagat:07} determine the
	directionality of the rule. They propose an extension of the distributional hypothesis called
	the \textbf{Directionality Hypothesis}: \textit{If two binary semantic relations tend to occur
	in similar contexts and the first one occurs in significantly more contexts than the second,
	then the second most likely implies the first and not vice versa.} Intuitively, consider
	the inference rule \textit{"X eat Y" $\Leftrightarrow$ "X like Y"}: there are many more things
	that someone might like than things that someone might eat. Thus, by applying the hypothesis
	one can infer that \textit{"X eat Y" $\Rightarrow$ "X like Y"}. Technically, the directionality
	of a plausible inference rule is determined by comparing the numbers of selectional
	preferences $\vert C_x,p_i,C_y \vert$ for $p_i$ and $\vert C_x,p_j,C_y \vert$ for $p_j$:
	\begin{itemize}
		\item[] \textit{If}~~~~~~~~~~~~~~~~~ $\dfrac{\vert C_x,p_i,C_y \vert}{\vert C_x,p_j,C_y
		\vert} \geq \beta$ ~~~~~~~~~ we conclude $p_i \Leftarrow p_j$
		\item[] \textit{else if} ~~~~~~~~~ $\dfrac{\vert C_x,p_i,C_y \vert}{\vert C_x,p_j,C_y \vert}
		\leq \dfrac{1}{\beta}$ ~~~~~~~~~ we conclude $p_i \Rightarrow p_j$
		\item[] \textit{else} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we conclude 
		$p_i \Leftrightarrow p_j$
	\end{itemize}
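The decision rule above translates directly into code; `n_i` and `n_j` stand for the selectional-preference counts $\vert C_x,p_i,C_y \vert$ and $\vert C_x,p_j,C_y \vert$, and the function name and return strings are illustrative:

```python
def directionality(n_i, n_j, beta=3.0):
    """LEDIR directionality decision for a plausible rule p_i <=> p_j.
    beta = 3 is the best-performing value reported in the experiments."""
    ratio = n_i / n_j
    if ratio >= beta:
        return "p_i <= p_j"   # p_j implies p_i
    if ratio <= 1.0 / beta:
        return "p_i => p_j"   # p_i implies p_j
    return "p_i <=> p_j"      # bidirectional
```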
Two factors that have a big impact on the performance of the LEDIR algorithm are the values of
	the parameters $\alpha$ and $\beta$, which decide the plausibility and directionality of an 
	inference rule, respectively. For plausible inference rules, too low a value of $\beta$
	means that the algorithm tends to predict most rules as unidirectional, and too
	high a value means that the algorithm tends to predict most rules as
	bidirectional. Through experiments, \cite{Bhagat:07} show that the performance
	of the systems reaches its peak with $\alpha = 0.15$ and $\beta = 3$.
	
	\vspace*{0.3cm}
	Both \cite{Pantel:07} and \cite{Bhagat:07} apply their algorithms to
	inference rules of the DIRT system. \cite{Pantel:07} randomly selected 100
	rules $p_i \Leftrightarrow p_j$ from the DIRT resource and, for each $p_i$, 10 distinct instances
	extracted from the Aquaint 1999 AP newswire corpus, resulting in a
	total of $1000$ instances. Then, in order to form the gold standard, two human
	judges were asked to tag each instance $\langle x,p_j,y \rangle$ as correct or
	incorrect. \cite{Bhagat:07}, on the other hand, create their gold standard by annotating
	157 inference rules with respect to whether they are plausible or not and, if
	applicable, their directionalities. Even though both systems show promising
	results in filtering incorrect inference rules (50\% for the ISP algorithm and 48\%
	for the LEDIR algorithm), they do not address the issue of antonymous relations
	like \textit{"X love Y $\Leftrightarrow$ X hate Y"}. Hence, other ideas need to
	be investigated. Moreover, \cite{Bhagat:07} show that when testing only
	directionality, the system obtains an accuracy of 63.63\%. This indicates that
	the system performs quite well on the task of determining the directionality of
	a rule, and that the problem of filtering out incorrect rules is significantly more
	challenging.
	
	\vspace*{0.3cm}
	To sum up, \cite{Pantel:07} and \cite{Bhagat:07} presented a collection of methods to filter
	incorrect inference rules out of inference resources and to determine the directionalities of
	the inference rules. They make use of the selectional preferences of the predicates involved
	in the rules. On average, they reach a precision of 50\% for the task of filtering implausible
	rules and more than 60\% for the task of determining the directionality. Hence, other
	approaches need to be investigated to improve the results on these tasks.
	 
	 
	 
\section{Evaluation of Paraphrase Acquisition}
\label{sec:eval}

	Whereas other language processing tasks such as machine translation and
	document summarization usually have multiple annual community-wide evaluations
	using standard test sets and both manual and automated metrics, the task
	of automated paraphrasing does not. One possible reason for this disparity could
	be that paraphrasing is not a real application in itself. However, for other
	similar tasks such as dependency parsing and word sense disambiguation, which
	are also not applications, such evaluations do exist.
	\cite{Madnani:10} suggest that the primary reason is that paraphrasing has been
	studied in an extremely fragmented fashion. Paraphrases appear in different forms and under
	different names in the context of different applications, such as synonymous
	collocation extraction and query expansion. Consequently, researchers
	in one community cannot easily share their lessons learned with those from
	other communities.

	\vspace*{0.3cm}	
	However, most recent work does include a direct evaluation of the paraphrasing
	itself. In the first evaluation method, the original phrase and its
	paraphrase are presented to several human judges along with the contexts in
	which the phrase occurs in the original sentence. The human judges are then
	asked to determine whether the two phrases are indeed paraphrastic
	\cite{Ibrahim:03,Pang:03}. A more direct method is to substitute the
	paraphrase for the original phrase in the original sentence and then present both
	sentences to human judges, who are asked to judge their semantic
	equivalence and the grammaticality of the new sentence
	\cite{Bannard:05,Callison-Burch:08b}. As a similar form of evaluation for
	textual entailment rules, \cite{Szpektor:07} propose a method called
	instance-based evaluation wherein not only the entailment rule but also a sample of
	sentences that match its left-hand side are presented to human judges, who
	are asked to assess whether the rule holds under each specific instance.
	
	\vspace*{0.3cm}
	Moreover, evaluation methods that use automatic measures have been studied in
	recent work. The traditional automatic evaluation measures of precision and
	recall are not particularly suited to this task because they require that a list of
	reference paraphrases be constructed before these measures can be
	computed. It is extremely unlikely that such a list will be exhaustive, so
	the precision and recall measurements will not be accurate. Most recently,
	\cite{Callison-Burch:08a} discuss ParaMetric, another automatic measure
	that may be used to evaluate paraphrase extraction methods. A detailed
	description of the approach is beyond the scope of this paper. In short, it
	uses the set of alignments produced by the paraphrase method for the sentence
	pairs in the corpus to compute precision and recall. Thus, it cannot be used
	for methods that do not produce alignments between sentence pairs.
	
	\vspace*{0.3cm}
	In this paper, we propose a new method which utilizes Web snippets to assess
	the correctness of instances that can instantiate the relation $p$. This
	method is similar in spirit to the instance-based method, but we use Web snippets
	to automatically determine whether the instances are correct and whether
	entailment holds. Therefore, human judges are not required.
	
	
\subsection{Instance-based Evaluation Methodology}
\label{subsec:instance}

	The basic idea of substitution-based evaluation is that items deemed to be paraphrases may
	behave as such only in some contexts and not others. Following that line, an entailment rule
	\textit{"L $\rightarrow$ R"} can be regarded as \textit{correct} if, in all relevant contexts
	in which the instantiated template $L$ is inferred from a given text, the instantiated
	template $R$ is also inferred from the text. That is, in order to assess whether a rule is correct,
	we should judge whether $R$ is typically entailed from those sentences that entail $L$ within
	relevant contexts for the rule. Therefore, \cite{Szpektor:07} propose a new evaluation scheme
	for entailment rules, called the instance-based approach, wherein human judges are presented not
	only with a rule but also with a sample of examples of the rule's usage.
	
	\vspace*{0.3cm}
	Given a rule $"L \rightarrow R"$, the first step is to automatically retrieve
	sentences from a given corpus that match $L$ and are thus likely to entail it. For
	each retrieved sentence, the arguments that instantiate the left template $L$
	and the right template $R$ are automatically extracted, termed the \textit{left
	phrase} and \textit{right phrase}, respectively. For example, for the rule
	\textit{"X lose Y $\rightarrow$ X surrender Y"} and a retrieved sentence
	"\textbf{Bread} has recently lost \textbf{its subsidy}", the left phrase
	\textit{Bread lose its subsidy} and the right phrase \textit{Bread surrender
	its subsidy} are extracted. Technically, \cite{Szpektor:07}
	describe a simple method to find sentences that match $L$ by finding a
	sub-tree of the sentence parse that is identical to the template structure.
	Thus, this matching method suffers from problems of incorrect sentence
	analysis and from semantic aspects like negation, modality and conditionals.
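	The generation of left and right phrases can be sketched as simple slot substitution (illustrative only; the actual sentence matching works on parse sub-trees):

```python
def instantiate(template, x, y):
    """Fill the X and Y slots of a template such as 'X lose Y'."""
    return template.replace("X", x).replace("Y", y)

# For the rule "X lose Y -> X surrender Y" and the arguments extracted
# from "Bread has recently lost its subsidy":
left = instantiate("X lose Y", "Bread", "its subsidy")
right = instantiate("X surrender Y", "Bread", "its subsidy")
```

	This reproduces the example above: the left phrase \textit{Bread lose its subsidy} and the right phrase \textit{Bread surrender its subsidy}.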
	
	\vspace*{0.3cm}
	For each example generated for a rule, the judges are presented with the given sentence, the
	left phrase and the right phrase. In order to assess whether entailment holds in this
	example, three questions are asked:
	\begin{itemize}
		\item[Q$_{le}$] Is the left phrase entailed from the sentence? A positive/negative answer
		corresponds to a \textbf{'Left entailed/not entailed'} judgement.
		\item[Q$_{re}$] Is the right phrase entailed from the sentence? A positive/negative answer
		corresponds to a \textbf{'Entailment holds/No entailment'} judgement.
		\item[Q$_{rc}$] Is the right phrase a likely phrase in English? A positive/negative answer
		corresponds to a \textbf{'Relevant/Irrelevant context'} evaluation.		
	\end{itemize}	
	The first question identifies sentences that do not entail the left phrase and thus should be
	ignored when evaluating the rule's correctness. The second question assesses whether the rule
	is valid for the current example. For the third question, if the right phrase is not
	likely to be grammatical in English, the given context is probably irrelevant for the rule,
	because it seems inherently incorrect to infer an implausible phrase.
	
	\vspace*{0.3cm}
	For each example, the three questions described above are presented to the judges in the following
	order: (1) Q$_{le}$, (2) Q$_{rc}$, (3) Q$_{re}$. If the answer to a certain question is negative,
	the subsequent questions do not need to be judged. That is, if the left phrase is not entailed,
	the sentence is ignored; and if the context is irrelevant, the right phrase cannot be
	entailed from the sentence, so the answer to the third question is already known to be negative.
	The last step is to calculate measures that determine whether the rule is correct.
	\cite{Szpektor:07} describe two measures which can be viewed as upper and lower bounds for
	the expected precision of the rule in actual systems:
	\begin{itemize}
		\item \textbf{Upper bound precision: } $\dfrac{\#Entailment~holds}{\#Relevant~context}$
		\item \textbf{Lower bound precision: } $\dfrac{\#Entailment~holds}{\#Left~entailed}$		
	\end{itemize}
	A rule is considered correct only if its precision is at least 80\%, which seems
	sensible for typical applied settings.
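	The two bounds can be computed from per-example judgements as in the following sketch (a minimal illustration; the dictionary keys and function name are our own naming, not from \cite{Szpektor:07}):

```python
def precision_bounds(judgements):
    """Compute the upper/lower bound precision of a rule from a list of
    per-example judgements, each a dict with boolean answers to the
    three questions (keys are illustrative).

    Following the judging order, an example only counts as a relevant
    context if the left phrase is entailed, and entailment can only
    hold within a relevant context.
    """
    left_entailed = sum(1 for j in judgements if j["left_entailed"])
    relevant = sum(1 for j in judgements
                   if j["left_entailed"] and j["relevant_context"])
    holds = sum(1 for j in judgements
                if j["left_entailed"] and j["relevant_context"]
                and j["entailment_holds"])
    upper = holds / relevant if relevant else 0.0
    lower = holds / left_entailed if left_entailed else 0.0
    return upper, lower
```

	Since every relevant context is also left-entailed, the first ratio is always at least as large as the second, matching their roles as upper and lower bounds.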
	
	\vspace*{0.3cm}
	\cite{Szpektor:07} applied the instance-based evaluation approach to evaluate two
	state-of-the-art unsupervised acquisition algorithms, DIRT \cite{Lin:01} and
	TEASE \cite{Szpektor:04}, as described above. Table \ref{tab:result} presents
	the evaluation results with 646 inference rules and 8945 examples to be
	judged, where P is the micro-averaged Precision, the percentage of correct
	rules out of all learned rules, and Y is the average Yield, the average number of
	correct rules learned for each input template. The major finding from the
	results presented in Table \ref{tab:result} is that the overall quality of DIRT
	and TEASE is very similar. Under the specific DIRT cutoff threshold chosen,
	DIRT exhibits somewhat higher Precision whereas TEASE has somewhat higher
	Yield. In addition, \cite{Szpektor:07} report that only about 15\% of the
	correct templates were learned by both algorithms, which implies that the two
	algorithms largely complement each other in terms of coverage. The reason may
	be that DIRT focuses on the domain of the local corpus used, whereas TEASE
	learns from the Web, extracting rules from multiple domains.
		
	\begin{table}[h]
		\centering
		\caption{Average precision and yield at the rule and template levels}		
		\begin{tabular}{|l |c |c |c |c|}
			\multicolumn{5}{l}{} \\		
			\hline
			&	\multicolumn{2}{|c|}{DIRT} & \multicolumn{2}{|c|}{TEASE} \\
			&	P 	& 	Y	& 	P	&	Y \\
			\hline
			\multicolumn{5}{l}{Rules} \\
			\hline
			Upper Bound & 30.5\% & 33.5 & 28.4\% & 40.3 \\			
			\hline
			Lower Bound & 18.6\% & 20.4 & 17\% & 24.1 \\
			\hline
			\multicolumn{5}{l}{Templates} \\		
			\hline	
			Upper Bound & 44\% & 22.6 & 38\% & 26.9 \\			
			\hline
			Lower Bound & 27.3\% & 14.1 & 23.6\% & 16.8 \\
			\hline			
		\end{tabular}
		\label{tab:result}
	\end{table}	
	
	
	
\section{Assessment of Paraphrase Instances}
\label{sec:assessment}


	In general, a grammatical phrase $s$ is paraphrasable with another phrase $t$ if and only
	if $t$ satisfies the following conditions:
	\begin{itemize}
		\item $t$ is grammatical
		\item $t$ holds if $s$ holds
		\item $t$ is substitutable for $s$ in some context
	\end{itemize}
	In the following examples, the phrase \textit{'lower the risk of'} is considered a paraphrase
	of the phrase \textit{'prevent'}, and \textit{'announce the arrest of'} a paraphrase of \textit{'is
	charged by'}:
	\begin{itemize}
		\item[(a)] X is charged by Y $\Rightarrow$ Y announced the arrest of X
		\item[(b)] X prevent Y $\Rightarrow$ X lower the risk of Y
	\end{itemize}
	In this paper, in order to determine whether these conditions hold, we assume that if there
	are a number of instances that can instantiate the relation of the phrase $s$ and can also
	substitute for the variables in that of the phrase $t$, then $t$ and $s$ are paraphrastic.
	Formally, we are given an inference rule $p_i \Rightarrow p_j$ and a pair
	of anchors $x$ and $y$ which can be substituted for the slots $X$ and $Y$ in the relation
	$p_i$, respectively. The method aims to determine whether the instance $\langle
	x,p_j,y	\rangle$ is correct and then whether the inference rule holds.
	In order to accomplish this task, we propose two models, called the Joint Instance Model
	and the Independent Instance Model. \\[0.3cm]
%
	\textbf{Joint Instance Model}: Each triple $\langle x,p_j,y \rangle$ is a candidate
	instance for $p_j$. Let $\vert \langle x,p_j,y \rangle \vert$ be the number of times the
	candidate occurs in the corpus. If this number is significantly large, the instances $x$
	and $y$ are likely to be correct. And if the number of correct instances is
	large enough, then the inference rule $p_i \Rightarrow p_j$ is considered
	to hold.\\[0.3cm]
%
	\textbf{Independent Instance Model}: Because of data sparseness, the Joint Instance Model
	can miss some instances that are relevant to the relation. To alleviate this problem,
	we propose a second model that is less strict, considering the arguments independently.
	All tuples $\langle x, p_j, * \rangle$ and $\langle *, p_j, y \rangle$ are candidate
	instances for $p_j$. Let $\vert \langle x, p_j, * \rangle \vert$ be the number of examples
	$\langle x, p_j, \hat{y} \rangle$ that occur in the corpus, wherein $\hat{y}$
	is in the same semantic class as $y$, and similarly let $\vert \langle *, p_j, y \rangle
	\vert$ be the number of examples
	$\langle \hat{x}, p_j, y \rangle$ that occur in the corpus, wherein $\hat{x}$
	is in the same semantic class as $x$. As in the Joint Instance Model, if these numbers are
	large, the instances $x$ and $y$ are likely to be correct, and if the number of
	correct instances is large enough, we can conclude that $p_i$ infers $p_j$.
	\\[0.3cm]
%	
	Given an instance $\langle x, p_j, y \rangle$, we compute its score of correctness by the
	following procedure:
	\begin{itemize}
		\item [\textbf{1.}] Retrieve Web snippets for $\langle x,p_j,y \rangle$,
		$\langle x,p_j,* \rangle$ and $\langle *,p_j,y \rangle$
		\item [\textbf{2.}] Extract features 
		\item [\textbf{3.}] Compute the score of correctness 
	\end{itemize}
	The rest of this section elaborates on each step in turn, taking inference rules from
	the DIRT database as examples.
	
\subsection{Retrieving Web snippets}

	In general, phrases appear less frequently than single words, which raises a
	data sparseness problem when gathering instances for a relation. One possible way to
	overcome this problem is to use back-off statistics, assuming independence
	between the constituent words. However, this approach risks introducing noise
	due to the ambiguity of single words. We study another approach, which utilizes the
	Web as a large corpus for finding examples. We retrieve Web snippets via the
	Google search engine with a Python script. These snippets are then used to
	calculate the features of an instance.
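	As a sketch, the three query strings issued for an instance can be built as follows (the function name and quoting convention are our own; the retrieval script itself is not specified in detail):

```python
def snippet_queries(x, p, y):
    """Build the quoted phrase queries for <x,p,y>, <x,p,*> and <*,p,y>.

    p is the surface form of the relation, e.g. "work for"; the
    wildcarded patterns simply drop the unconstrained argument.
    """
    return ['"{} {} {}"'.format(x, p, y),
            '"{} {}"'.format(x, p),
            '"{} {}"'.format(p, y)]
```

	Each returned query is submitted to the search engine, and the retrieved snippets are pooled per pattern before feature extraction.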

\subsection{Extracting features}

	To compute the score of correctness, we consider the following set of features as described
	in \cite{Fujita:08}:
	\begin{itemize}
		\item[]\textbf{MOD1:} If a number of occurrences of the instance $\langle x,p_j,y
		\rangle$ are found in the Web snippets, the instances $x$ and $y$ are likely
		to be correct
		\item[]\textbf{MOD2:} If a number of examples $\langle x,p_j,* \rangle$ are found in the
		Web snippets, the instance $x$ is likely to be correct for the slot $X$
		\item[]\textbf{MOD3:} If a number of examples $\langle *,p_j,y \rangle$ are found in the
		Web snippets, the instance $y$ is likely to be correct for the slot $Y$		
	\end{itemize}
	To extract these features, we simply count over the sentences obtained in the
	previous step. Within the scope of this paper, we use a simple matching method to
	obtain these counts, in which the instances $x$ and $y$ must occur within a window of
	three words of the phrase $p_j$. Technically, one could use more sophisticated
	methods, such as using a dependency parser to parse the sentences and identify whether
	$x$ and $y$ are the modifier and modifiee of $p_j$.
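	The window-based matching can be sketched as follows (a minimal illustration with whitespace tokenization; the function name is ours):

```python
def window_match(snippet, x, p, y, window=3):
    """Check that x ends at most `window` tokens before p starts, and
    that y starts at most `window` tokens after p ends."""
    tokens = snippet.lower().split()
    tx, tp, ty = (s.lower().split() for s in (x, p, y))

    def starts(seq, sub):
        # all start positions of the sub-sequence `sub` inside `seq`
        return [i for i in range(len(seq) - len(sub) + 1)
                if seq[i:i + len(sub)] == sub]

    for i in starts(tokens, tp):
        # x must end within `window` tokens before p
        x_ok = any(i - (j + len(tx)) <= window
                   for j in starts(tokens[:i], tx))
        # y must begin within `window` tokens after p
        tail = tokens[i + len(tp):i + len(tp) + window + len(ty)]
        y_ok = bool(starts(tail, ty))
        if x_ok and y_ok:
            return True
    return False
```

	Counting the matching snippets for each of the three patterns then yields the MOD1, MOD2 and MOD3 features directly.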
	
\subsection{Computing the score of correctness}
	
	We define the score of correctness as a function combining the extracted features:
	\[
		score = \alpha \cdot MOD1 + \beta \cdot MOD2 + \gamma \cdot MOD3
	\]
	wherein $\alpha, \beta, \gamma$ are the weights of the features. In this paper, we set $\alpha=1$,
	$\beta=0.5$ and $\gamma = 0.5$. That is, an instance whose arguments $x$ and $y$ are found
	concurrently is weighted more heavily than one whose arguments are found independently.
	Finally, an instance pair $(x,y)$ is considered correct if its score is larger than a
	threshold $\tau$.
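	Putting the pieces together, the scoring and decision step can be sketched as follows (the default value of $\tau$ is purely illustrative, since the paper only requires some threshold):

```python
def correctness_score(mod1, mod2, mod3,
                      alpha=1.0, beta=0.5, gamma=0.5):
    """Weighted combination of the snippet-count features:
    mod1 counts <x, p_j, y>, mod2 counts <x, p_j, *>,
    mod3 counts <*, p_j, y> in the retrieved snippets."""
    return alpha * mod1 + beta * mod2 + gamma * mod3

def is_correct(mod1, mod2, mod3, tau=1.0):
    # tau = 1.0 is an assumed value for illustration only
    return correctness_score(mod1, mod2, mod3) > tau
```

	Because $\alpha$ is twice $\beta$ and $\gamma$, a single joint occurrence contributes as much to the score as two independent ones.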
	
	
	
\subsection{Experiments}
\subsubsection{Experimental Setting}
	
	To evaluate the idea of assessing the correctness of an example along with a
	rule, we pick the two most frequent verb templates from the RTE-2 dataset,
	{\em X work for Y} and {\em X be attacked by Y}. For each verb entry, we get
	the list of related candidate entailment templates from the DIRT knowledge
	base\footnote{http://demo.patrickpantel.com/demos/lexsem/paraphrase.htm}. We then
	convert the DIRT format into a more readable format; for example,
	{\em N:subj:V$<$work$>$V:for:N} is converted into {\em X work for Y}.
	We use the top 10 such candidates, as shown in Table \ref{tab:templates}.
	Along with the two pivot templates, we also pick examples from the RTE-2 dataset which
	instantiate these templates. More specifically, 18 examples are extracted for
	the first pivot template, and 11 examples for the second one. We extract
	examples from the RTE-2 dataset because most sentences in this dataset are
	simple; they are thus easy to extract, and it is guaranteed that the left-hand-side
	phrase is entailed by these examples. After that, these examples are assessed
	as to whether they are correct when instantiating the right-hand-side phrase.

	\begin{table}[h]
		\centering
		\caption{Sample templates and examples in test set}
		\begin{tabular}{l l l}
			& 							\\
			\hline
			Pivot template & Entailment Templates & Examples\\
			\hline
			\multirow{5}{*}{X work for Y} & X BE hire Y & (Regina Shueller, Italy's La
			Repubblica newspaper)\\ 
										  & X BE hire by Y & (Phil Wittmann, Tom Benson)\\
									      & X work at Y & (Happy Madison, the company owned by Sandler)\\
									      & X work with Y & (Christopher Hill, the US) \\
									      &	X BE employ by Y & (Steve Jobs, Apple)\\
			\hline									 
			\multirow{5}{*}{X be attacked by Y} & X on attack by Y & (A patrol car, the
			San Carlos Battalion) \\ 
												& X BE attack Y & (Fenastras, FMLN) \\
												& X BE ambush Y & (Steve Jobs, Sculley and other Apple executives)\\
												& X at stone throw Y & (Power lines, FMLN)\\
												& X BE ransack Y & (the UN Security Council, Ahmadinejad) \\
			\hline
		\end{tabular}
		\label{tab:templates}
	\end{table}
	
	\vspace*{0.3cm}
	Each example and entailment rule assessed by the method is then presented to
	a human judge in order to judge the quality of the system. We use the average
	precision score, i.e., the percentage of examples on which the human judges
	agree with the system, over all examples. In this paper, because we use a small
	set of examples extracted from the RTE-2 dataset, we do not evaluate entailment
	rules.
 
	
	
\subsubsection{Results}

	We evaluate the quality of the algorithm using the precision score, the
	percentage of examples that human judges agree on, over all correct examples
	returned by the algorithm. We do not evaluate the examples which are considered
	incorrect by the system. An incorrect example means that the
	right-hand-side phrase instantiated by the example does not occur in the Web
	snippets, but this does not indicate that such examples are irrelevant for
	instantiating the entailment templates. For example, for the entailment rules {\em "X be
	attacked by Y" $\Rightarrow$ $p_j$}, almost all examples extracted from the RTE-2
	dataset are considered incorrect by the algorithm, but some of them are
	obviously plausible. Table \ref{tab:percentage of correct} presents the
	percentage of instances returned as correct by the algorithm and the
	percentage of our agreement on these examples.
	
	\begin{table}[h]
		\centering
		\caption{Percentage of examples returned as being correct by the algorithm
		(P1) and our agreement on them (P2)}
		\begin{tabular}{l l l l}
			& \\
			Pivot template & Entailment templates & P1 & P2 \\			
			\hline
			\multirow{10}{*}{X work for Y} & X BE hire Y & 6/18 & 6/6 \\
										  & X work with Y & 12/18 & 8/12 \\
										  & X BE employ by Y & 7/18 & 7/7 \\
										  & X work at Y & 11/18 & 9/11 \\
										  & X do work for Y & 2/18 & 2/2 \\
										  & X go work for Y & 4/18 & 4/4 \\
										  & X BE sentence to Y & 4/18 & 0/4\\
										  & X tell Y & 14/18 & 0/4 \\
										  & X join Y & 12/18 & 8/12\\
										  & X work in Y & 9/18 & 6/9
		\end{tabular}
		\label{tab:percentage of correct}
	\end{table}
	
	Table \ref{tab:percentage of correct} shows that only a small number of the 18
	instances are returned by the algorithm as being correct.	
\subsubsection{Discussions}	
	
\section{Conclusions}
\label{sec:conclusion}



\section*{Acknowledgments}

\begin{thebibliography}{}

\bibitem[\protect\citename{Ibrahim \bgroup et al.\egroup }2003]{Ibrahim:03}
Ali Ibrahim, Boris Katz, and Jimmy Lin.
\newblock 2003.
\newblock Extracting structural paraphrases from aligned monolingual corpora.
\newblock In {\em Proceedings of the Second International Workshop on
Paraphrasing}, pages 57-64.

\bibitem[\protect\citename{Fujita \bgroup et al.\egroup }2008]{Fujita:08}
Atsushi Fujita and Satoshi Sato.
\newblock 2008.
\newblock Computing paraphrasability of syntactic variants using Web snippets.
\newblock In {\em Proceedings of IJCNLP}, Hyderabad.

\bibitem[\protect\citename{Pang \bgroup et al.\egroup }2003]{Pang:03}
Bo Pang, Kevin Knight and Daniel Marcu.
\newblock 2003.
\newblock Syntax-based alignment of multiple translations: Extracting paraphrases
and generating new sentences.
\newblock In {\em Proceedings of HLT-NAACL}, pages 102-109.

\bibitem[\protect\citename{Bannard and Callison-Burch}2005]{Bannard:05}
Colin Bannard and Chris Callison-Burch.
\newblock 2005.
\newblock Paraphrasing with bilingual parallel corpora.
\newblock In {\em Proceedings of the 43rd Annual Meeting of the Association for
Computational Linguistics}.

\bibitem[\protect\citename{Callison-Burch \bgroup et al.\egroup
}2006]{Callison-Burch:06} 
Chris Callison-Burch, Trevor Cohn, and Mirella Lapata.
\newblock 2006.
\newblock Annotation guidelines for paraphrase alignment.
\newblock Technical report, University of Edinburgh.

\bibitem[\protect\citename{Callison-Burch}2008]{Callison-Burch:08b}
Chris Callison-Burch.
\newblock 2008.
\newblock Syntactic constraints on paraphrases extracted from parallel corpora.
\newblock In {\em Proceedings of Empirical Methods in Natural Language
Processing}, pages 196-205.

\bibitem[\protect\citename{Callison-Burch \bgroup et al.\egroup
}2008]{Callison-Burch:08a}
Chris Callison-Burch, Trevor Cohn, and Mirella Lapata.
\newblock 2008.
\newblock ParaMetric: An automatic evaluation metric for paraphrasing.
\newblock In {\em Proceedings of the 22nd International Conference on Computational
Linguistics}, pages 97-104.

\bibitem[\protect\citename{Lin}1998]{Lin:98}
Dekang Lin.
\newblock 1998.
\newblock Dependency-based evaluation of MINIPAR.
\newblock In {\em Proceedings of the Workshop on the Evaluation of Parsing Systems at LREC},
Granada.

\bibitem[\protect\citename{Lin and Pantel}2001]{Lin:01}
Dekang Lin and Patrick Pantel.
\newblock 2001.
\newblock DIRT - Discovery of Inference Rules from Text.
\newblock In {\em Proceedings of the ACM Conference on Knowledge Discovery and Data Mining
(KDD-01)}, pages 317-322, San Francisco, CA.

\bibitem[\protect\citename{Metzler \bgroup et al.\egroup }2007]{Metzler:07}
Donald Metzler, Susan Dumais, and Christopher Meek.
\newblock 2007.
\newblock Similarity measures for short segments of text.
\newblock In {\em Proceedings of the 29th European Conference on IR Research.}


\bibitem[\protect\citename{Szpektor \bgroup et al.\egroup }2004]{Szpektor:04}
Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola.
\newblock 2004.
\newblock Scaling Web-based acquisition of entailment relations.
\newblock In {\em Proceedings of the Conference on EMNLP}, Barcelona, Spain.

\bibitem[\protect\citename{Szpektor \bgroup et al.\egroup }2007]{Szpektor:07}
Idan Szpektor, Eyal Shnarch, and Ido Dagan.
\newblock 2007.
\newblock Instance-based evaluation of entailment rule acquisition.
\newblock In {\em Proceedings of the 45th Annual Meeting of the Association for Computational
Linguistics}, Prague, Czech Republic.
	
\bibitem[\protect\citename{Androutsopoulos and Malakasiotis}2010]{Andr:10}
Ion Androutsopoulos and Prodromos Malakasiotis.
\newblock 2010.
\newblock A survey of paraphrasing and textual entailment methods.
\newblock {\em Journal of Artificial Intelligence Research}, 38:135-187.

\bibitem[\protect\citename{Madnani and Dorr}2010]{Madnani:10}
Nitin Madnani and Bonnie J. Dorr.
\newblock 2010.
\newblock Generating phrasal and sentential paraphrases: A survey of data-driven methods.
\newblock {\em Computational Linguistics}, 36:341-387.

\bibitem[\protect\citename{Pantel \bgroup et al.\egroup }2007]{Pantel:07}
Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy.
\newblock 2007.
\newblock ISP: Learning inferential selectional preferences.
\newblock In {\em Proceedings of NAACL HLT 2007}, pages 564-571.

\bibitem[\protect\citename{Bhagat \bgroup et al.\egroup }2007]{Bhagat:07}
Rahul Bhagat, Patrick Pantel, and Eduard Hovy.
\newblock 2007.
\newblock LEDIR: An unsupervised algorithm for learning directionality of inference rules.
\newblock In {\em Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language
Processing and Computational Linguistics}, pages 161-170.

\bibitem[\protect\citename{Barzilay \bgroup et al.\egroup }1999]{Barzilay:99}
Regina Barzilay, Kathleen R. McKeown, and Michael Elhadad.
\newblock 1999.
\newblock Information fusion in the context of multi-document summarization.
\newblock In {\em Proceedings of the 37th annual meeting of the Association for
Computational Linguistics}.

\bibitem[\protect\citename{Harabagiu and Hickl}2006]{Harabagiu:06}
Sanda Harabagiu and Andrew Hickl.
\newblock 2006.
\newblock Methods for using textual entailment in open-domain question answering.
\newblock In {\em Proceedings of the 44th Annual Meeting of the Association for Computational
Linguistics}.


\end{thebibliography}

\end{document}
