\section{Introduction}

Information overload has become one of the greatest challenges of recent years, largely due to the rapid increase in the amount of data produced on the Internet. Automatic document summarization is one of the most promising solutions proposed to address this problem.

Automatic text summarization is the process of automatically reducing a text document to create a summary that preserves its most important information. There are two main approaches to text summarization: extractive summarization and abstractive summarization. Extractive summarization aims to select the sentences that are most representative of the original document. Abstractive summarization, on the other hand, tries to generate new sentences that capture the important points of the document using natural language generation methods. Most studies today pursue extractive approaches to text summarization.

Multi-document summarization refers to the task of automatically generating a summary of multiple documents about the same topic. This problem is harder than single-document summarization, since multiple documents may contain both overlapping and contradictory information. The main tasks are detecting and handling redundancy while generating a coherent and complete summary \cite{li2007multi}.
 
Several methods including supervised approaches \cite{das2007survey,DBLP:conf/coling/PeiYFH12}, topic driven models \cite{nastase2008topic,hennig2009topic,wang2009multi}, and clustering based models \cite{radev2004centroid,aliguliyev2010clustering} have been proposed in the literature for the task of multi-document summarization.
Recently, graph-based summarization methods have attracted increasing attention from researchers \cite{erkan2004lexrank,wan2008multi,shen2010multi} and have compared favorably with other state-of-the-art summarization approaches \cite{mihalcea2004graph}.
Graph-based methods represent a document set as a graph whose vertices are sentences and whose edges denote similarity between sentences; an edge is constructed between two sentences when their similarity score exceeds some threshold \cite{kumar2011automatic}. LexRank \cite{erkan2004lexrank} is one of the most salient graph-based methods for multi-document summarization. Its general idea is that sentences connected to many other significant sentences are themselves important. Like most other graph-based studies, LexRank uses cosine similarity based on tf-idf to measure the similarity between nodes in the sentence graph. However, these methods treat sentences as bags of words and cannot capture semantically related information in sentences, which may result in low-quality summaries.
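To make the graph construction concrete, the following is a minimal sketch of tf-idf cosine similarity and thresholded edge construction as described above. The function names, whitespace tokenization, and threshold value are illustrative assumptions, not LexRank's actual implementation:

```python
import math
from collections import Counter

def idf_scores(sentences):
    """Inverse document frequency, treating each sentence as a document."""
    n = len(sentences)
    df = Counter()
    for sent in sentences:
        df.update(set(sent.lower().split()))
    return {w: math.log(n / df[w]) for w in df}

def tfidf_cosine(s1, s2, idf):
    """Cosine similarity between two sentences under a tf-idf bag-of-words model."""
    tf1, tf2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    num = sum(tf1[w] * tf2[w] * idf.get(w, 0.0) ** 2 for w in set(tf1) & set(tf2))
    den1 = math.sqrt(sum((tf1[w] * idf.get(w, 0.0)) ** 2 for w in tf1))
    den2 = math.sqrt(sum((tf2[w] * idf.get(w, 0.0)) ** 2 for w in tf2))
    return num / (den1 * den2) if den1 and den2 else 0.0

def build_graph(sentences, threshold=0.1):
    """Add an edge between two sentences when their similarity exceeds the threshold."""
    idf = idf_scores(sentences)
    return [(i, j)
            for i in range(len(sentences))
            for j in range(i + 1, len(sentences))
            if tfidf_cosine(sentences[i], sentences[j], idf) > threshold]
```

Note that two paraphrases sharing no surface vocabulary receive a similarity of zero under this model, which is exactly the bag-of-words limitation discussed above.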

To address this problem, we propose to utilize dependency grammars as a similarity metric for sentences. A dependency grammar is a way of representing the syntactic dependencies between words. With this representation, concepts shared across multiple documents and relations between similar contents can be found, and our aim is to generate summaries of multiple text documents that are both coherent and complete.

There have been a number of studies on using dependency grammars for summarization. Dependency parsing has been used to find common information between sentences for sentence fusion \cite{barzilay2005sentence,filippova2008sentence} and to detect uninformative parts of sentences for sentence compression \cite{yousfi2008sentence,blake2007unc}. Dependency grammars have also been used to identify concepts in domain-specific terminologies by matching noun phrases to domain-specific vocabularies \cite{fiszman2004abstraction}, and for opinion summarization \cite{zhuang2006movie,somprasertsri2010mining}. To the best of our knowledge, none of the previous studies have used dependency grammars to compute sentence similarity in a graph-based summarization approach.
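To illustrate how typed dependencies can serve as a similarity signal, the toy sketch below compares two sentences via the overlap of their (relation, head, dependent) triples. This is a simplified stand-in, not the kernels used in this study: the triples are written by hand, whereas in practice they would come from a dependency parser, and the Jaccard overlap is an assumed measure chosen only for illustration:

```python
def dep_similarity(deps1, deps2):
    """Jaccard overlap of typed dependency triples (relation, head, dependent)."""
    a, b = set(deps1), set(deps2)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hand-written typed dependencies for two paraphrased sentences;
# in practice these would be produced by a dependency parser.
s1 = {("nsubj", "acquired", "Google"), ("dobj", "acquired", "YouTube"),
      ("nmod", "acquired", "2006")}
s2 = {("nsubj", "acquired", "Google"), ("dobj", "acquired", "YouTube"),
      ("advmod", "acquired", "famously")}
```

Because the triples encode grammatical roles, the two sentences match on who acquired what even though their surface word order and modifiers differ.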

In this study, we adapt two dependency-tree-based similarity kernels [Ref and Ref] and create a new similarity function based on typed dependency grammars, used in place of the tf-idf-based cosine similarity of the LexRank multi-document summarization system.
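The substitution itself can be sketched as a LexRank-style ranking via damped power iteration over a row-normalized similarity matrix, into which any symmetric similarity function can be plugged. This is a simplified sketch under assumed parameter values, not the exact LexRank formulation:

```python
def lexrank(sentences, sim, damping=0.85, iters=50):
    """Rank sentences by damped power iteration over a similarity graph.

    `sim` is any symmetric similarity function; replacing tf-idf cosine
    with a dependency-based measure is the substitution studied here.
    """
    n = len(sentences)
    w = [[sim(sentences[i], sentences[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    row = [sum(r) or 1.0 for r in w]  # guard against isolated sentences
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n +
                  damping * sum(w[j][i] / row[j] * scores[j] for j in range(n))
                  for i in range(n)]
    return scores
```

The highest-scoring sentences would then be selected for the extractive summary, with redundancy handled at selection time.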

The organization of the paper is as follows. In the next section, we explain the dependency-tree-based similarity kernels used in this study. In Section 3, the data sets and experimental setup are explained. We then compare the results obtained with LexRank's original similarity measure and with the dependency-grammar-based methods on the task of multi-document summarization. Finally, we present the conclusions of this study.
 
