
\section{Methodology}

Dependency tree representations of sentences allow us to identify syntactic dependency relations between words and to emphasize the informative parts of a sentence. With these properties, this representation is more powerful than the bag-of-words model. Considering these advantages, we utilized two dependency tree kernels \cite{culotta2004dependency,choi2013social} to measure the similarity of nodes in a sentence graph. In addition to these kernels, we designed a new sentence similarity function based on typed dependency grammars. The next sections describe these dependency tree based methods in detail.


\subsection{The Dependency Tree Kernel }

The first dependency tree kernel used in this study was proposed by Culotta and Sorensen \shortcite{culotta2004dependency} and is based on \cite{zelenko2003kernel}. It compares two dependency trees starting from their root nodes and continues comparing subtrees recursively. Each word in a sentence is represented by a node in the dependency tree, and each node is described by a set of features. In \cite{culotta2004dependency}, the features are selected as in Table~\ref{ftable}. 


\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline \bf Index & \bf Feature Name & \bf Example \\ \hline
1 & word & Rainsy, letter \\
2 & part-of-speech tag & NNP, VBD \\
3 & general POS tag & noun, verb \\
4 & chunk tag & NP, VP \\
5 & entity type & person, location \\
6 & entity level & name, nominal \\
7 & WordNet hypernyms & food, state \\
8 & relation argument & ARG-A, ARG-B \\
\hline
\end{tabular}
\end{center}
\caption{\label{ftable} Feature Set of The Dependency Tree Kernel in \cite{culotta2004dependency}. }
\end{table}

We used this dependency tree kernel to compute sentence similarity for multi-document summarization and adapted the original feature set to our task. The feature set used in this study and the corresponding weights are shown in Table~\ref{f2table}. Figure~\ref{fig:example_dt} shows the dependency trees and corresponding feature vectors of two example sentences. 

\begin{figure*}
        \centering
        \begin{subfigure}[b]{0.6\textwidth}
                \fbox{\includegraphics[width=\textwidth]{dt1.png}}
                \caption{Dependency tree of the sentence {\it ``Rainsy said Wednesday.''} }
                \label{fig:dt1}
        \end{subfigure}%
        \\
        \begin{subfigure}[b]{0.7\textwidth}
                \fbox{\includegraphics[width=\textwidth]{dt2.png}}
                \caption{Dependency tree of the sentence {\it ``He uttered her name.''}}
                \label{fig:dt2}
        \end{subfigure}
        \caption{Dependency trees and their corresponding feature vectors of two example sentences. Feature vectors are in this format: [{\it word itself, general POS tag, POS tag, chunk tag, entity type, hypernym set}]. }\label{fig:example_dt}
\end{figure*}


In \cite{culotta2004dependency}, for any entity pair in a sentence, the authors find the smallest common subtree containing both entities and compute the kernel similarity over these subtrees to reduce the effect of irrelevant words. For the task of multi-document summarization, we instead apply the dependency tree kernel to the dependency trees of whole sentences. 

\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline \bf Index & \bf Feature Name & \bf Weights \\ \hline
1 & word & 5 \\
2 & part-of-speech tag & 0.01 \\
3 & general POS tag & 0.01 \\
4 & chunk tag & 0.01 \\
5 & entity type & 5  \\
6 & WordNet hypernyms & 0.01  \\
\hline
\end{tabular}
\end{center}
\caption{\label{f2table} Feature Set of The Dependency Tree Kernel used in this study. }
\end{table}

For two dependency trees with root nodes $r_1$ and $r_2$, the similarity kernel is defined by Culotta and Sorensen \shortcite{culotta2004dependency} as follows:

\small
\begin{equation}
\mathbf{K(r_1, r_2)} = \left\{
\begin{array}{ll}
0, & \textit{if } \mathbf{match(r_1, r_2)} = 0\\
\mathbf{sim(r_1, r_2)} + \mathbf{K_c(c(r_1), c(r_2))}, & \textit{otherwise}
\end{array}\right.
\end{equation}
\normalsize
The kernel $K_c$ is a kernel function defined over the children of nodes $t_i$ and $t_j$:

\small
\begin{equation}
\mathbf{K_c(c(t_i), c(t_j))} =
 \sum_{u \in c(t_i)} \sum_{v \in c(t_j)} \lambda^{n+m} \times \mathbf{K(u, v)} 
\end{equation}
\normalsize
where \textit{c(t)} denotes the children of node \textbf{\it t}, and \textbf{\it n} and \textbf{\it m} are the number of children of $t_i$ and $t_j$, respectively.
\\
\newline
The matching function {\it match} is defined on \{0,1\}: it takes the value 1 if the matching feature set of node $t_i$, $\mathbf{f_{m_i}}$, equals the matching feature set of node $t_j$, $\mathbf{f_{m_j}}$, and 0 otherwise:

\small
\begin{equation}
  \mathbf{match(t_i, t_j)} = \left\{
\begin{array}{c l}     
    1, & if \hspace{6pt} \mathbf{f_{m_i}} = \mathbf{f_{m_j}}\\
  0, & otherwise
\end{array}\right.
\end{equation}

\normalsize
and the similarity function {\it sim} simply counts the number of common feature values of two nodes and is defined as follows:

\small
\begin{equation}
\mathbf{sim(t_i, t_j)} =\\
 \sum_{k} F(\mathbf{f_i[k]}, \mathbf{f_j[k]})
\end{equation}

\normalsize
where,

\small
\begin{equation}
  F(\mathbf{f_i[k]}, \mathbf{f_j[k]}) = \left\{
\begin{array}{c l}     
    1, & if \hspace{6pt} \mathbf{f_i[k]} = \mathbf{f_j[k]}\\
  0, & otherwise
\end{array}\right.
\end{equation}
\\
\newline
\normalsize
In our study, the matching feature is set to {\it feature-3} in Table~\ref{f2table}. 
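As an illustration, the recursive kernel above can be sketched in a few lines of Python. The node layout (dictionaries with a \texttt{features} list and a \texttt{children} list), the decay value $\lambda = 0.5$, and the three-feature toy nodes are our own illustrative assumptions, not part of the original kernel:

```python
# Illustrative sketch of the recursive dependency tree kernel.
# Nodes are plain dicts; feature order here: [word, POS tag, general POS tag].

LAMBDA = 0.5        # decay factor (illustrative value)
MATCH_INDEX = 2     # matching feature: feature-3, the general POS tag (0-based)

def match(t1, t2):
    """1 iff the matching features of the two nodes are equal."""
    return 1 if t1["features"][MATCH_INDEX] == t2["features"][MATCH_INDEX] else 0

def sim(t1, t2):
    """Count the feature values the two nodes share."""
    return sum(f1 == f2 for f1, f2 in zip(t1["features"], t2["features"]))

def Kc(c1, c2):
    """Children kernel: sum of K over all child pairs, decayed by lambda^(n+m)."""
    n, m = len(c1), len(c2)
    return sum(LAMBDA ** (n + m) * K(u, v) for u in c1 for v in c2)

def K(r1, r2):
    """Tree kernel: 0 on a matching-feature mismatch, otherwise node
    similarity plus the children kernel."""
    if match(r1, r2) == 0:
        return 0.0
    return sim(r1, r2) + Kc(r1["children"], r2["children"])

# Toy subtrees of "Rainsy said" and "He uttered"
r1 = {"features": ["said", "VBD", "verb"],
      "children": [{"features": ["Rainsy", "NNP", "noun"], "children": []}]}
r2 = {"features": ["uttered", "VBD", "verb"],
      "children": [{"features": ["He", "PRP", "noun"], "children": []}]}
print(K(r1, r2))  # 2.25 = sim(roots) 2 + 0.5^2 * sim(children) 1
```

Note that this sketch uses the unweighted count of {\it sim}; the feature weights of Table~\ref{f2table} would simply replace the count with a weighted sum.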



\subsection{Dependency Tree Trigram Kernel}


The Dependency Tree Trigram Kernel was proposed by Choi and Kim \shortcite{choi2013social} for the task of social relation extraction. Its main idea is to calculate the similarity between two target sentences using trigram units created from the dependency trees of the corresponding sentences.


We illustrate the process of creating dependency trigrams on the two sentences ``\textit{He lost her}'' and ``\textit{He lost her keys}''. The dependency trees created from these sentences are depicted in Figure~\ref{fig:sample}.

\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.45\textwidth]{dtsample.png}}
\caption{A: Typed dependency tree of {\it ``He lost her.''}  B: Typed dependency tree of {\it ``He lost her keys.''} }\label{fig:sample}
\end{figure}




To form trigrams from a dependency tree, the following rules must be satisfied for a trigram unit $w_i$ $\rightarrow$ $w_k$ $\leftarrow$ $w_j$:


\begin{itemize}


\item $w_i$ and $w_j$ should be the children of $w_k$.


\item index $i$ should be less than index $j$.


\end{itemize}


The dependency trigrams that can be generated from these sentences are:


\begin{itemize}


\item \{ He $\rightarrow$ lost $\leftarrow$ her \} and,


\item \{He $\rightarrow$ lost $\leftarrow$ keys, He $\rightarrow$ lost $\leftarrow$ her \}.


\end{itemize}
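A minimal Python sketch of these formation rules, under an assumed encoding of a dependency tree as a list of \texttt{(index, word, child indices)} triples (this layout is ours, for illustration only):

```python
from itertools import combinations

def dependency_trigrams(nodes):
    """For every head w_k, emit a trigram (w_i -> w_k <- w_j) for each pair
    of its children with index i < j, preserving word order."""
    words = {idx: w for idx, w, _ in nodes}
    trigrams = []
    for _, head, children in nodes:
        for i, j in combinations(sorted(children), 2):  # guarantees i < j
            trigrams.append((words[i], head, words[j]))
    return trigrams

# "He(1) lost(2) her(3)": 'lost' governs both 'He' and 'her'
tree = [(1, "He", []), (2, "lost", [1, 3]), (3, "her", [])]
print(dependency_trigrams(tree))  # [('He', 'lost', 'her')]
```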



These trigrams are the inputs of the kernel function given below.


\small

\begin{equation}
\resizebox{.5 \textwidth}{!}
{
$ \mathbf{K(X,Y)= \frac{\sum_{i=1}^{n} max(s(X^i_T, Y^1_T), s(X^i_T, Y^2_T),..., s(X^i_T, Y^m_T))}{n}} $
}
\end{equation}
\\
\normalsize
where $X$ and $Y$ are the trigram sets of two different sentences, $n$ and $m$ refer to the numbers of trigrams in $X$ and $Y$, respectively, $X^i_T$ refers to the $i^{th}$ trigram of $X$, and $Y^j_T$ to the $j^{th}$ trigram of $Y$. $\mathbf{s}$ is the score function used to compute the similarity between two trigram units.

\small
\begin{equation}
\begin{array}{ll}
\mathbf{s(X^i_T, Y^j_T)}  = & \mathbf{\sum_{q=1}^{p} \alpha_q N_q (X^i_{T_{left}}, Y^j_{T_{left}})} \\[6pt]
 & \mathbf{ \times \sum_{q=1}^{p} \alpha_q N_q (X^i_{T_{center}}, Y^j_{T_{center}})} \\[6pt]
 & \mathbf{ \times \sum_{q=1}^{p} \alpha_q N_q (X^i_{T_{right}}, Y^j_{T_{right}})}
\end{array}
\end{equation}
\\
\normalsize
where $p$ is the number of features considered for each node (such as {\it POS tags} and {\it entity types}), $N_q(X^i_{T_x}, Y^j_{T_x})$ is a binary function that returns 1 when the $q^{th}$ attributes of the nodes match, and $\alpha_q$ is the weight assigned to the $q^{th}$ attribute of a node, which determines the effect of that attribute on the overall score. We used the first five feature types in Table~\ref{f2table} for this kernel.
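The two equations above can be sketched in Python as follows; each trigram is assumed to be a (left, center, right) triple of feature tuples, and the weights in \texttt{ALPHA} are illustrative:

```python
ALPHA = [5, 0.01]  # illustrative weights, e.g. for the word and POS-tag features

def slot_score(a, b):
    """Inner sum: weighted count of matching features of one trigram slot."""
    return sum(w for w, fa, fb in zip(ALPHA, a, b) if fa == fb)

def s(x, y):
    """Score of two trigram units: product of the left, center and right
    slot scores."""
    return slot_score(x[0], y[0]) * slot_score(x[1], y[1]) * slot_score(x[2], y[2])

def K(X, Y):
    """Kernel: average, over the trigrams of X, of the best-matching
    trigram in Y."""
    if not X or not Y:
        return 0.0
    return sum(max(s(x, y) for y in Y) for x in X) / len(X)

he, lost = ("He", "PRP"), ("lost", "VBD")
her, keys = ("her", "PRP$"), ("keys", "NNS")
X = [(he, lost, her)]                       # trigrams of "He lost her"
Y = [(he, lost, keys), (he, lost, her)]     # trigrams of "He lost her keys"
print(K(X, Y))  # ~125.75: the identical trigram wins the max
```

With the trigram sets of the running example, the identical trigram {\it He $\rightarrow$ lost $\leftarrow$ her} in both sets dominates the max, so the partially matching trigram {\it He $\rightarrow$ lost $\leftarrow$ keys} does not lower the score.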



One advantage of the trigram model is that it allows a length-free comparison of the target sentences: since sentence similarity is computed over the pairwise similarities of trigram units, which all have the same path length of three nodes, sentences of different lengths can be compared directly.



The weaknesses of the dependency tree trigram kernel can be listed as follows:


\begin{itemize}


\item given two sentences which convey completely different ideas but share some identical words, we may have trigram matches which will affect the overall similarity score,


\item the features that we use can be still weak to capture the similarity between two nodes, therefore this will affect the overall success of the kernel.


\end{itemize}


\subsection{A Similarity Function Using Types of Dependency Relations}

\normalsize
The Dependency Tree Kernel and the Dependency Tree Trigram Kernel ignore the types of the dependencies in a sentence; they treat all dependencies in a dependency tree as equally important. However, there are different types of dependency relations in a tree, and not all of them are equally important. For example, a dependency relation that connects the subject of a sentence to a verb is semantically more significant than a dependency relation which indicates the determiner of a word. This situation is illustrated in Figure~\ref{fig:erptsqfit}.

\begin{figure*}[htp]
\centering
\fbox{\includegraphics[width=1\textwidth]{first_sentence_typed.png}}
\caption{Typed dependency tree of {\it ``Sam Rainsy said Wednesday that he was unsatisfied with the
guarantee.''} The {\it \textbf{nsubj}} dependency is more informative than the {\it \textbf{det}} dependency.}\label{fig:erptsqfit}
\end{figure*}

Hence, the typed dependency grammar of a sentence is better suited to the task of text summarization in terms of finding the parts of sentences that are semantically more informative. 

Based on this motivation, we developed a new method that makes use of the typed dependency grammars of sentences to compute their similarity scores. As a first attempt, we utilized only object and subject dependency relations and observed the performance of the method. 
Our typed dependency similarity function is defined as follows: 

For two sentences $S_1$ and $S_2$, let $D_1$ and $D_2$ be the set of subject and object dependencies of $S_1$ and $S_2$, respectively. Then, the similarity between $S_1$ and $S_2$ is,

\small
\begin{equation}
K(S_1, S_2) = \mathbf{sim(d_1, d_2)}
\end{equation}
\normalsize
for each dependency $d_1 \in D_1$ and each dependency $d_2 \in D_2$ of the form $type(head, dependent)$.
We introduce two row vectors $\mathbf{g}$ and $\mathbf{d}$ which are defined as below:
\\
\newline
$\mathbf{g = [ w_1, w_2, \cdots, w_n, w_{n+1} ]}$, 
\\
\newline
where the $i^{th}$ element is the prize to be given when the $i^{th}$ feature of the {\it head} of $d_1$ equals the $i^{th}$ feature of the {\it head} of $d_2$, for $i \in \{1,2,...,n\}$, and $w_{n+1}$ is the prize to be given when no corresponding features of the {\it heads} of $d_1$ and $d_2$ are equal. 
\\
\newline
The vector $\mathbf{d}$ has the same structure as $\mathbf{g}$, but its weights apply to the equivalence of the corresponding features of the {\it dependents}. 
\\
\newline
Then,  $\mathbf{sim(d_1, d_2)}$ is defined as,

\small
\begin{equation}
\mathbf{sim(d_1, d_2)} = \sum_{i,j} ((\mathbf{g}^\mathbf{T} \circ \mathbf{d})  \odot \mathbf{S})_{ij}
\end{equation} 
\normalsize
where $\mathbf{S}$ is the selection matrix whose $(i,j)^{th}$ element is defined by the function $\mathbf{m}$ as follows:

\small
\begin{equation}
  \mathbf{m(i,j)} = \left\{
\begin{array}{cl}     
    1, & if \hspace{6pt} \mathbf{f}_{d_{1}-head}\mathbf{[i]} =  \mathbf{f}_{d_{2}-head}\mathbf{[i]} \\
       & and \hspace{6pt} \mathbf{f}_{d_{1}-dep}\mathbf{[j]} =  \mathbf{f}_{d_{2}-dep}\mathbf{[j]}\\
  0, & otherwise
\end{array}\right.
\end{equation}
\\
\newline
\normalsize
where $\mathbf{f}_{d_{1}-head}\mathbf{[i]}$ denotes the $i^{th}$ feature of the {\it head} of $d_1$ and $\mathbf{f}_{d_{1}-dep}\mathbf{[j]}$ denotes the $j^{th}$ feature of the {\it dependent} of $d_1$.
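The two definitions above translate directly into Python; \texttt{numpy} would express the same with \texttt{np.outer} and an elementwise product, but a dependency-free sketch suffices:

```python
def m(i, j, f_head1, f_head2, f_dep1, f_dep2):
    """1 iff the i-th head features and the j-th dependent features both agree."""
    return 1 if f_head1[i] == f_head2[i] and f_dep1[j] == f_dep2[j] else 0

def sim(g, d, S):
    """Sum of the entrywise (Hadamard) product of the outer product of g and d
    with the selection matrix S."""
    return sum(g[i] * d[j] * S[i][j]
               for i in range(len(g)) for j in range(len(d)))
```

For instance, with $\mathbf{g} = [1, 2]$, $\mathbf{d} = [3, 4]$ and $\mathbf{S}$ the identity (hypothetical values, for illustration), \texttt{sim} returns $1 \cdot 3 + 2 \cdot 4 = 11$.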
 \\
\newline
We provide an example to demonstrate how the method works:
\\
\newline
Let $S_1$ be the sentence {\it ``The president complained in a letter.''} and $S_2$ be the sentence {\it ``The president shouted.''}
\\
\newline
Their corresponding dependency grammars, as returned by the {\it Stanford Parser}, are as follows:
\\
\newline
{\it \textbf{Dependency grammar of $S_1$}}:
\\
$ \hspace{12pt} \\
det(president, The)\\
nsubj(complained, president)\\
root(ROOT, complained)\\
det(letter, a)\\
prep\_in(complained, letter)\\
 \hspace{12pt} $
\\
{\it \textbf{Dependency grammar of $S_2$}}:
\\
$ \hspace{12pt} \\
det(president, The)\\
nsubj(shouted, president)\\
root(ROOT, shouted)\\
 \hspace{12pt} $
\\
Then, {\it subject-object} dependency sets of $S_1$ and $S_2$ are as follows:
\\
\newline
$D_1 = \{ nsubj(complained, president) \}$
\\
$D_2 = \{ nsubj(shouted, president) \}$
\\
\newline
Both $D_1$ and $D_2$ have only one dependency element. So, $d_1$ is  $nsubj(complained, president)$ and $d_2$ is $nsubj(shouted, president)$.  
\\
\newline
Let the feature set be composed of {\it feature-1} and {\it feature-6} in Table~\ref{f2table}, as set in our experiments for this similarity function.

We find the feature vectors of {\it heads} and {\it dependents} in $d_1$ and $d_2$:
\\
\begin{itemize}
 \item $d_1$ {\it head} ``complained''  :
\\
\lbrack $f_1$: {\it complained, $f_6$: \{charge\}} \rbrack
 \item $d_1$ {\it dependent} ``president'' :
\\
\lbrack  $f_1$: {\it president,  $f_6$: \{corporate executive, business executive, head of state \}} \rbrack 

\item $d_2$ {\it head} ``shouted'' : 
\\
\lbrack  $f_1$: {\it shouted,  $f_6$: \{talk, speak, utter, verbalize \}} \rbrack

\item $d_2$ {\it dependent} ``president'' : 
\\
\lbrack  $f_1$: {\it president,  $f_6$: \{corporate executive, business executive, head of state \}} \rbrack
\end{itemize}

Let the predefined $\mathbf{g}$ and $\mathbf{d}$ vectors be as follows:
\\
\newline
$\mathbf{g} =$ \lbrack $10, 3, 0.5$\rbrack \qquad which means that the prize for matching {\it head} words is 10, the prize for overlapping {\it hypernyms} of the {\it heads} is 3, and the prize when no features of the {\it heads} match is 0.5.
\newline
\newline
$\mathbf{d} =$ \lbrack $100, 10, 0.5$\rbrack \qquad which means that the prize for matching {\it dependent} words is 100, the prize for overlapping {\it hypernyms} of the {\it dependents} is 10, and the prize when no features of the {\it dependents} match is 0.5.
\\
\newline
Now we create the selection matrix $\mathbf{S}$.
\\
\newline
Remember that, $\mathbf{m(i, j)}$ returns 1 if $i^{th}$ feature of {\it head} of $d_1$ is equal to $i^{th}$ feature of {\it head} of $d_2$, and $j^{th}$ feature of {\it dependent} of $d_1$ is equal to $j^{th}$ feature of {\it dependent} of $d_2$. 
\\
\newline
So, the $\mathbf{S}$ matrix is equal to:
\\
\begin{equation}
\mathbf{S} = 
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
1 & 0 & 0
\end{bmatrix}
\end{equation}
\\
This is because the $1^{st}$ feature (the word) of the {\it dependents} of $d_1$ and $d_2$ matches, while none of the features of the {\it heads} match.
\\
\newline
Then, the similarity between $S_1$ and $S_2$ is calculated using the following formula:
\\
\newline
\small
$\mathbf{K(S_1, S_2) = sim(d_1, d_2) = \sum_{i,j} ((\mathbf{g}^\mathbf{T} \circ \mathbf{d})  \odot \mathbf{S})_{ij}   = 50}$.
\normalsize
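The worked example can be checked numerically by plugging the $\mathbf{g}$, $\mathbf{d}$ and $\mathbf{S}$ above into the similarity formula (a Python sketch mirroring the arithmetic only):

```python
g = [10, 3, 0.5]     # head prizes: word match, hypernym overlap, no match
d = [100, 10, 0.5]   # dependent prizes: word match, hypernym overlap, no match
S = [[0, 0, 0],
     [0, 0, 0],
     [1, 0, 0]]      # heads: no feature matched; dependents: words matched

outer = [[gi * dj for dj in d] for gi in g]          # outer product of g and d
K = sum(outer[i][j] * S[i][j] for i in range(3) for j in range(3))
print(K)  # 50.0
```

The only surviving term is $g_3 \cdot d_1 = 0.5 \times 100 = 50$, matching the result above.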
