\section{Experiments and Results}


\subsection{Dataset and Tools}


The DUC 2004 Task-2 dataset was used in our experiments (\url{http://duc.nist.gov/duc2004/}). It comprises 50 English document clusters curated from the Associated Press and New York Times newswires.


For evaluation, we used ROUGE (Recall-Oriented Understudy for Gisting Evaluation), the automatic summary evaluation metric developed by Chin-Yew Lin (\url{http://www.berouge.com/}). It scores a candidate summary by automatic n-gram matching against reference summaries.
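As a minimal illustration of this n-gram matching idea (not the official ROUGE toolkit, which additionally handles stemming, stopword options, and multiple references), ROUGE-1 recall can be sketched as clipped unigram overlap:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Unigram-overlap recall: matched reference unigrams / total reference unigrams."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped counts: each reference unigram is matched at most as
    # often as it occurs in the candidate.
    overlap = sum(min(cnt, cand[w]) for w, cnt in ref.items())
    return overlap / sum(ref.values())
```

For example, \texttt{rouge1\_recall("the cat sat on the mat", "the cat is on the mat")} matches five of the six reference unigrams.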


In our experiments, we made use of several tools provided by the Stanford NLP Group. We used the Stanford Parser \cite{Klein:2003:AUP:1075096.1075150}, a Java-based package, to create dependency grammars and to compute the feature values in Table~\ref{f2table}. We also used JAWS (Java API for WordNet Searching) \cite{spell2009java} to extract the hypernym sets of a word from the WordNet database \cite{miller1995wordnet} for use as a feature.


We also utilized the Dragon Toolkit, a development package for information retrieval and text mining \cite{Zhou07dragontoolkit:}, provided by Drexel University for academic purposes (\url{http://dragon.ischool.drexel.edu/}). We used this toolkit when performing multi-document text summarization with the LexRank method.



\subsection{Experiment}



Erkan and Radev \shortcite{erkan2004lexrank} evaluated their LexRank method inside the MEAD summarization system \cite{radev2004mead}, where they combined LexRank with position and length features and used the Cross-Sentence Informational Subsumption (CSIS) reranker \cite{radev2004centroid}. However, the Dragon Toolkit provides neither the position feature nor the CSIS reranker for the text summarization task. To create an experimental setup similar to that of \cite{erkan2004lexrank}, we implemented both and integrated them into the reimplementation of the LexRank system provided by the Dragon Toolkit.
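The reranking step can be approximated as follows (a simplified greedy sketch in the spirit of CSIS, not Radev et al.'s exact formulation; the threshold value mirrors the reranker threshold in our parameter setup):

```python
def rerank(ranked_sentences, similarity, threshold=0.5):
    """Walk sentences in decreasing centrality order and drop any sentence
    whose similarity to an already selected sentence exceeds the threshold,
    treating it as informationally subsumed. `similarity` is any pairwise
    sentence-similarity function returning a value in [0, 1]."""
    selected = []
    for sent in ranked_sentences:
        if all(similarity(sent, s) <= threshold for s in selected):
            selected.append(sent)
    return selected
```

Any of the similarity functions discussed below can be plugged in as the \texttt{similarity} argument.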

The Dragon Toolkit provides the original LexRank tf-idf based cosine similarity kernel. To test our kernel methods and the similarity function that exploit the dependency tree structure, we simply replaced the cosine similarity function with the proposed kernels when calculating sentence similarity. We implemented the dependency tree kernel and the dependency tree trigram kernel, and also developed a similarity function that uses the types of dependency relationships.
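The default tf-idf cosine kernel being replaced can be sketched as follows (a minimal version assuming pre-tokenized sentences and precomputed idf weights; the toolkit's own implementation additionally handles tokenization, stemming, and idf estimation over the cluster):

```python
import math
from collections import Counter

def tfidf_cosine(sent_a, sent_b, idf):
    """Cosine similarity between the tf-idf vectors of two tokenized
    sentences. `idf` maps each word to its inverse document frequency,
    assumed to be precomputed over the document cluster."""
    ta, tb = Counter(sent_a), Counter(sent_b)
    dot = sum(ta[w] * tb[w] * idf.get(w, 0.0) ** 2 for w in set(ta) | set(tb))
    na = math.sqrt(sum((ta[w] * idf.get(w, 0.0)) ** 2 for w in ta))
    nb = math.sqrt(sum((tb[w] * idf.get(w, 0.0)) ** 2 for w in tb))
    return dot / (na * nb) if na and nb else 0.0
```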


The pre-processing steps of the experiment are as follows:

\begin{itemize}

\item We first created the dependency grammars of the sentences in our dataset using the Stanford Parser, and then built a dependency tree for each sentence from these grammars.

\item Then, for each node in the tree, the set of corresponding features in Table~\ref{f2table} was created (we mainly used the Stanford NLP tools and JAWS in this step).

\end{itemize}
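The data structure produced by these steps can be sketched as a feature-annotated tree node (the field names here are illustrative stand-ins; the actual feature set is the one listed in Table~\ref{f2table}):

```python
from dataclasses import dataclass, field

@dataclass
class DepNode:
    """One node of a sentence's dependency tree, carrying the per-node
    features used by the kernels: the surface word, its POS tag, the typed
    dependency relation to its head, and its WordNet hypernym set."""
    word: str
    pos: str                             # part-of-speech tag from the parser
    relation: str                        # typed relation to head, e.g. "nsubj"
    hypernyms: frozenset = frozenset()   # hypernym set extracted via WordNet
    children: list = field(default_factory=list)
```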


After these pre-processing steps, we ran the LexRank multi-document summarization method with the parameter set shown in Table~\ref{table3}. In the sentence similarity calculation step, we sequentially tried all three dependency tree based similarity methods as well as the original tf-idf based cosine similarity method.
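With a continuous centrality threshold, LexRank reduces to a power iteration over the row-normalized sentence similarity matrix; a minimal sketch (the damping value here is illustrative, not the exact setting of the toolkit):

```python
def lexrank(sim, damping=0.15, tol=1e-6, max_iter=200):
    """Continuous LexRank: power iteration on the row-normalized similarity
    matrix. With a continuous threshold, edge weights are the raw pairwise
    similarities rather than a binarized adjacency matrix."""
    n = len(sim)
    # Row-normalize similarities into a stochastic transition matrix.
    trans = []
    for row in sim:
        s = sum(row)
        trans.append([w / s for w in row] if s else [1.0 / n] * n)
    scores = [1.0 / n] * n
    for _ in range(max_iter):
        new = [damping / n + (1 - damping) *
               sum(trans[j][i] * scores[j] for j in range(n))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, scores)) < tol:
            return new
        scores = new
    return scores
```

The resulting centrality scores are then combined with the position and length features according to the weights in Table~\ref{table3}.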

We also implemented the Lead-based summarization approach, which selects sentences using only the position feature \cite{erkan2004lexrank}, as the baseline approach.
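This baseline amounts to taking sentences in document order until a length budget is exhausted; a minimal sketch (the word-based budget is an assumption for illustration, not the exact DUC length constraint):

```python
def lead_summary(sentences, word_limit=100):
    """Lead baseline: take sentences in their original document order
    (position feature only) until the word budget would be exceeded.
    The word-based budget is illustrative."""
    summary, used = [], 0
    for sentence in sentences:
        n = len(sentence.split())
        if used + n > word_limit:
            break
        summary.append(sentence)
        used += n
    return summary
```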


\begin{table}[h]

\begin{center}

\begin{tabular}{|l|l|}

\hline \bf Parameter & \bf Value \\ \hline

LexRank Centrality Weight & 10 \\

Position Feature Weight & 1 \\

Length Feature Cut-off & 9\\

LexRank Centrality Threshold & continuous \\

Reranker Threshold & 0.5 \\

\hline

\end{tabular}

\end{center}

\caption{\label{table3} Parameter Setup of LexRank System }

\end{table}



\subsection{Results}





In this section, we present the results of our experiments with different sentence similarity methods on the DUC 2004 Task-2 data set. These methods are the Dependency Tree Kernel, the Dependency Tree Trigram Kernel, the Typed Dependency Similarity Function, and the tf-idf based cosine similarity function, which is the original similarity function of the LexRank summarization method.

We set the Lead-based summarization method as the baseline approach. Table~\ref{font-table} presents the ROUGE-1 results with minimum, maximum, and average scores.


\begin{table}[h]

\begin{center}

\begin{tabular}{|l|l|l|l|}

\hline \bf Similarity Method & \bf R1-Min & \bf R1-Max & \bf R1-Avg \\ \hline

Tf-idf Cosine & 0.3073 & 0.4069 & 0.3582 \\

DT Kernel & 0.2636 & 0.3523 & 0.3081 \\

DT Trigram & 0.2551 & 0.3417 & 0.2971 \\

Typed DT Sim & 0.2839 & 0.3697 & 0.3259 \\

Lead Based & 0.2748 & 0.3764 & 0.3259 \\

\hline

\end{tabular}

\end{center}

\caption{\label{font-table} ROUGE-1 scores for different similarity methods in LexRank system on DUC 2004 Task-2 data set. }

\end{table}



The original LexRank method with the tf-idf based cosine similarity function produced the best score among all models. Despite its simple structure, the Lead-based model outperformed the untyped dependency tree based kernels: neither the Dependency Tree Kernel nor the Dependency Tree Trigram Kernel achieved a higher score than the baseline. The Typed Dependency Similarity Function, however, performed on par with the baseline model on average, despite its primitive structure, which uses only the subject and object dependency relations of a dependency tree.

