\documentclass[journal]{IEEEtran}

\begin{document}
\title{Comparison of text sets using Data Mining and Similarity Measure Methods}

\author{Alfredo L. Foltran, Ana Ozaki Rivera, Fernando M. Mendon\c ca, Jacir L. Bordim}

\maketitle

\begin{abstract}
Data analysis is one of the main subjects of data mining. With the increasing amount of data, or more precisely, text available on the Internet, choosing what will be transmitted to the user is a great challenge. This article applies different similarity measure methods to analyze sets of summarized texts, which are generated by different summarization algorithms.

\end{abstract}

\section{Introduction}  
In this article, three algorithms will be used to summarize a set of texts. All of them produce an extract of the original text. These algorithms are explained in the following section. Then, another three algorithms will be used to measure the similarity between the automatic extracts and the manual abstracts. The generated results will be used to decide which of the extraction algorithms is the best in terms of generation speed and accuracy.

Summarized texts may be an extract or an abstract of the original text. An extract is obtained by selecting the sentences that carry the main idea of the text. An abstract, on the other hand, contains the main ideas of the original text rewritten as a new text; in the automatic approach this technique is more complex, as it involves artificial intelligence. In order to decide whether an extract or abstract is consistent with the original text, a large number of comparisons must be made between the summary and the original. Similarity measures are used to make these comparisons.

There are only a few summarization solutions available. Among free software, the best one is the Open Text Summarizer \cite{OTS}, implemented with English language support. Other possibilities were implemented by the NILC (N\' ucleo Interinstitucional de Lingu\' istica Computacional). The most widely used is GistSumm. It was developed for the Portuguese language, but as of today its code is proprietary.

This article is divided into seven sections. The second section explains the extraction algorithms and their characteristics. The third section presents the different methods used to decide whether two texts are similar, and how similar they are. The fourth section explains the changes made to the OTS code, and the fifth describes the performed tests. The sixth section presents the test results in terms of similarity and execution speed for each of the proposed algorithms. The last section concludes the article in terms of relevance and contribution to summarization research, more specifically its contribution to the summarization of Brazilian Portuguese texts.

\begin{center}
Brasilia \\
June 10th, 2008
\end{center}

\section{Extraction algorithms}
The extraction algorithms are mainly based on the fact that every text has a central idea behind it. That idea can be identified through one or more sentences of the original text. To identify such central sentences, statistical methods can be used to score each sentence in the text. The sentences with the highest scores are then selected to compose the final text. What differs from one algorithm to another is how the sentences are scored and how the words are manipulated.

Despite these differences, the main algorithm assumes that the subject of a text is the list of ideas most discussed in it. First, the text is scanned and a list of all the words and their occurrences in the text is created. This list is sorted by the number of occurrences of each word. Then, all the language's stopwords are removed from the list. Stopwords are words that, in general, do not give information about the subject of the text, such as articles, prepositions and pronouns. In this article the language used is Brazilian Portuguese.

From the resulting list, the algorithm assumes that the text talks about the words that appear most frequently, the keywords. Hence, an important sentence will be one that talks about them. All the sentences of the text are segmented, and each sentence receives a grade based on the keywords it contains. A sentence that holds many important words is given a high grade. To produce a 20\% extract, the 20\% of sentences with the highest grades are printed.
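The grade-and-select scheme described above can be sketched as follows. This is a minimal illustration, not the OTS implementation: the stopword list, the period-based sentence splitting and the 20\% default ratio are simplifying assumptions.

```python
from collections import Counter

# Illustrative stopword list only; a real summarizer loads a full list
# for the target language (Brazilian Portuguese in this article).
STOPWORDS = {"a", "o", "de", "e", "que", "the", "of", "and"}

def extract(text, ratio=0.2):
    """Grade each sentence by summed keyword frequency; keep the top ratio."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w for w in text.lower().split()
             if w.isalpha() and w not in STOPWORDS]
    freq = Counter(words)  # keyword list: word -> occurrences in the text

    def grade(sentence):
        # Grade = sum of the text-wide frequencies of the sentence's keywords.
        return sum(freq[w] for w in sentence.lower().split() if w in freq)

    n = max(1, round(len(sentences) * ratio))
    best = sorted(sentences, key=grade, reverse=True)[:n]
    # Emit the selected sentences in their original order.
    return [s for s in sentences if s in best]
```

A 25\% extract of a four-sentence text, for instance, keeps only the single sentence whose words are most frequent in the whole text.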

\subsection{Keyword}
This method values a sentence based on the number of keywords present in it. The value may be how many times the keywords appear in the sentence or the sum of the weights given to each of the keywords. The keywords can be obtained in different ways, such as manually, by counting, by corpus frequency or by frequency in the text \cite{RelatoriosNucleoInterinstitucional}.

The weights given to the words present in the text are actually associated with an identifier that represents the root of that word. To identify this root, some of the techniques mentioned in Section \ref{MET-ROOT} may be used.
Once the sentence is valued, it can also receive a multiplier based on the structure of the text. For example, we may assume that the title of the text, or its first sentence, is normally where the central ideas of the text can be found.

\subsubsection{Valuing Methods}
The words that compose the keyword list are those with the highest values, with the exception of the \emph{stopwords}. The words are manipulated in order to transform them into their roots. For example, the word ``running'' is first transformed into ``run''. Then, if both words appear in the text, they are counted as if they were the same.

\paragraph{Count}
\label{COUNT}
In this method the value of a sentence is the sum of the occurrences of its keywords in the whole text. A reference {\it corpus} may be used in the calculation. In this case, the keyword's value is based not only on its frequency in the text but also on the {\it corpus}. High values are given to words that have many occurrences in the text and few occurrences in the reference {\it corpus}.

\paragraph{Frequency in the Corpus}
\label{FREQUENCY-CORPUS}
The {\it Corpus} Frequency method uses the statistics of a reference {\it corpus}. The idea is the same as in the Counting method. This method uses the {\it Term Frequency - Inverse Document Frequency} (TF-IDF) equation, as shown below:

\begin{equation}
tfidf = tf \times idf.
\label{EQ-TFIDF}
\end{equation}
The {\it Term Frequency} equation is defined by:

\begin{equation}
tf = \frac{w_i}{\sum_k w_k}, 
\label{EQ-TF}
\end{equation}
where $w_i$ is the number of occurrences of the word in the text and the divisor is the total number of words in the text. The {\it Inverse Document Frequency} equation is explained below:

\begin{equation}
idf = \log{\frac{|D|}{|d_i \supset w_i|}}, 
\label{EQ-IDF}
\end{equation}
where $|D|$ represents the total number of documents in the {\it corpus} and the divisor is the number of documents in which the word appears.

As mentioned above, high values are given to words that have many occurrences in the text and only a few in the reference {\it corpus}. With this approach, common words are eliminated.
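Equations \ref{EQ-TFIDF}, \ref{EQ-TF} and \ref{EQ-IDF} can be computed directly. The sketch below assumes whitespace tokenization and that every scored word occurs in at least one corpus document, as the article's $idf$ definition requires.

```python
import math

def tf(word, doc):
    # Term frequency: occurrences of the word over the total words in the text.
    words = doc.lower().split()
    return words.count(word) / len(words)

def idf(word, corpus):
    # Inverse document frequency: log of total documents over the number of
    # corpus documents containing the word (assumed to be at least one).
    containing = sum(1 for doc in corpus if word in doc.lower().split())
    return math.log(len(corpus) / containing)

def tfidf(word, doc, corpus):
    return tf(word, doc) * idf(word, corpus)
```

Note how a word present in every corpus document gets $idf = \log 1 = 0$, which is exactly how common words are eliminated.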

\paragraph{Frequency in the text}
\label{TF-ISF}
In this method the sentence's value is the average of its keywords' values. The main difference from the previous method is that it does not need a reference {\it corpus}, as each keyword's frequency is analyzed within the text itself.
This method uses the {\it Term Frequency - Inverse Sentence Frequency} (TF-ISF) equation:

\begin{equation}
tfisf = tf \times isf,
\label{EQ-TFISF}
\end{equation}
where $tf$ is the number of occurrences of the word in the sentence and $isf$ is given by the following formula:

\begin{equation}
isf = \log{\frac{|S|}{|s_i \supset w_i|}},
\label{EQ_ISF}
\end{equation}
where $|S|$ is the total number of sentences in the text and the denominator is the number of sentences where the word occurs.
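TF-ISF mirrors TF-IDF with sentences playing the role of documents, so no external corpus is needed. A minimal sketch, again assuming whitespace tokenization and that the word occurs in at least one sentence:

```python
import math

def tf_isf(word, sentence, sentences):
    """TF-ISF of a word: in-sentence count times inverse sentence frequency."""
    # tf: occurrences of the word in this sentence.
    tf = sentence.lower().split().count(word)
    # isf: log of total sentences over the sentences containing the word.
    containing = sum(1 for s in sentences if word in s.lower().split())
    return tf * math.log(len(sentences) / containing)
```

A word confined to one sentence out of three scores $\log 3$ there, while a word spread across every sentence scores zero everywhere.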

\vspace{2mm}

\subsubsection{Methods to identify the root of a word}
\label{MET-ROOT}
As mentioned before, the values are not given to each word but to each root, and the words are associated with their corresponding roots. The effect is that all the variations of a word are counted as the root, for statistical purposes.

There is more than one method to identify the root of a word. Some of them will not find the precise root, but an identifier that acts like the root within a sentence. The methods used in the algorithms follow.

\paragraph{Lexical Dictionary}
\label{LEXIC-DICT}
The lexical dictionary is the set of words, the vocabulary of a language, in which it is possible to identify the roots of the words used in a text. It is mainly based on the words, their derivatives and synonyms.

\paragraph{N-Gram}
\label{N-GRAM}
The usage of n-grams is simple and efficient. It is often a good alternative for Latin-derived languages, which normally contain a large number of word variations, making it difficult to use rules dictionaries. This technique is based on the truncation of words to a number of letters that depends on the size of the word. It can also identify some suffixes that can be removed from the words before they are truncated, making the process more accurate. For more information regarding n-grams see \cite{SUMAutomatica}.
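A length-dependent truncation can be sketched as below. The specific policy (keep roughly two thirds of the word, never fewer than four characters) is an illustrative assumption, not the exact rule used by NGOTS.

```python
def ngram_root(word, min_len=4):
    """Truncate a word to a length-dependent prefix used as its root identifier."""
    word = word.lower()
    if len(word) <= min_len:
        return word  # short words are kept whole
    # Hypothetical policy: keep about two thirds of the word,
    # but never fewer than min_len characters.
    keep = max(min_len, (2 * len(word)) // 3)
    return word[:keep]
```

Under this policy, inflected variants such as "correr" and "correndo" are reduced to overlapping prefixes, which is the basis for matching them statistically.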

\paragraph{Rules Dictionary}
\label{RULES-DICT}
This dictionary has substitution rules that transform most words into their roots. This method does not work perfectly on natural languages, though it works very well on structured languages. Some possible rules are presented below:

\begin{itemize}
\item prefix deletion;
\item suffix deletion;
\item synonyms;
\item manual substitution.
\end{itemize}

It is possible to define that every word ending in {\it mente} has this suffix removed, but not every word ending in {\it mente} is an adverb; {\it semente}, for example. Most rules have exceptions, which makes them difficult to define. Because of the rule processing, this method is more time-consuming than the others presented.
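A rules dictionary with per-rule exception lists, as in the {\it mente} / {\it semente} example above, can be sketched like this. The rule table here is a tiny illustrative fragment, not the dictionary built for this work.

```python
# Hypothetical rule table: each entry is a suffix to strip plus explicit
# exceptions the rule must not touch (e.g. "semente" ends in "mente"
# but is a noun, not an adverb).
SUFFIX_RULES = [
    ("mente", {"semente", "mente"}),
]

def apply_rules(word):
    """Reduce a word to its root by the first applicable substitution rule."""
    word = word.lower()
    for suffix, exceptions in SUFFIX_RULES:
        if word.endswith(suffix) and word not in exceptions:
            return word[:-len(suffix)]
    return word  # no rule applied: the word is its own root
```

The exception sets are what make this method expensive: every rule application requires checking the word against the rule's exception list.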

\section{Measures of Similarity and Dissimilarity}
Similarity and dissimilarity measures quantify how similar comparable objects are. The objects must be compared over the same attributes. Some attributes are present in just a few objects of a data set; as they assume zero values in most cases, they are called asymmetric. There are measures specially built to deal with the asymmetric property of some attributes. They are explained below.

\vspace{2mm}

\subsection{Jaccard Similarity Coefficient}
\label{JACCARD}
The Jaccard similarity coefficient is used to handle asymmetric binary attributes, as only non-zero values are relevant for the calculation. Its formula is defined by:

\begin{equation}
J = \frac{\mid{\ R_1\ }\bigcap {\ R_2\ } \mid}{P}, 
\end{equation}
where $R_1$ is the set of attributes of one of the objects being compared and $R_2$ is the set of the other object. The numerator represents the intersection between sets $R_1$ and $R_2$, and the denominator, $P$, is the number of attributes present in at least one of the objects.
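With attribute sets in hand, the coefficient is a one-liner over set operations; the sketch below assumes the attributes are already reduced to binary presence/absence, as the measure requires.

```python
def jaccard(attrs_a, attrs_b):
    """Jaccard coefficient: |A intersect B| / |A union B| over attribute sets."""
    a, b = set(attrs_a), set(attrs_b)
    if not (a | b):
        return 0.0  # two empty objects: no shared non-zero attributes
    return len(a & b) / len(a | b)
```

Only attributes present in at least one object enter the denominator, which is exactly how 0-0 matches are ignored.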

\subsection{Cosine Similarity}
\label{COS}
The cosine similarity measure is used to represent objects with different frequencies of their attributes. Documents are an example of objects that may have different frequencies of their attributes, the words. Like the Jaccard coefficient, the cosine measure only considers attributes that are present in at least one of the two objects being compared. In the document example, all the words would form the set of attributes, but for each document most of them would be zero valued. If the 0-0 matches were considered, documents in general would be highly similar. The cosine similarity is defined by:

\begin{equation}
\cos(x,y) = \frac{x \cdot y}{\parallel x \parallel \parallel y \parallel}.
\end{equation}
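For documents, $x$ and $y$ are word-frequency vectors, and the formula above becomes a dot product over shared words. A minimal sketch, assuming whitespace tokenization:

```python
import math
from collections import Counter

def cosine(doc_a, doc_b):
    """Cosine similarity of two texts as word-frequency vectors."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    # Only words present in doc_a can contribute; 0-0 matches never count.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Unlike Jaccard, repeated words weigh more: this is the frequency sensitivity that the conclusion later relies on.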

\subsection{Recall}
\label{RECALL}
In the similarity context, the Recall measure is based on the number of matched attributes between two objects divided by the number of attributes of one of them. When the number of possible attributes is too large, words for example, only a set of relevant attributes may be considered. Recall is defined by the following formula:

\begin{equation} 
\frac{\mid{\ R_1\ }\bigcap {\ R_2\ } \mid}{Q}, 
\end{equation}

where $R_1$ is the set of relevant attributes of one of the objects and $R_2$ is the set of relevant attributes of the other. $Q$ is the number of elements of $R_1$. In this experiment, the relevant attributes are a set of keywords, so recall compares two sets of keywords, one for each object being compared. From this comparison, it takes the number of matching attributes and divides it by $Q$.
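As with Jaccard, this reduces to set operations once the keyword sets are fixed; here the reference set $R_1$ plays the role of the denominator $Q$.

```python
def recall(reference_keywords, candidate_keywords):
    """Recall: matched keywords divided by the size of the reference set."""
    ref = set(reference_keywords)   # R1: keywords of the reference object
    cand = set(candidate_keywords)  # R2: keywords of the other object
    if not ref:
        return 0.0
    return len(ref & cand) / len(ref)
```

Recall is asymmetric: swapping the two arguments changes the denominator, so the reference (here, the manual summary's keywords) must be the first argument.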

\section{Implementation}
In order to be able to use a tool that is both flexible and free, with Portuguese support, the following methodology was adopted:

\subsection{Reuse and adaptation of the OTS code}
One of the advantages of the Open Text Summarizer is that it is open source. For that reason, the authors of \cite{SUMAutomatica} were able to alter its source code in order to adapt it to the Brazilian Portuguese language and to prepare its output to be analyzed by the similarity measure methods used.
  

\subsection{Creation of a Brazilian Portuguese rules dictionary (without verbs)}
\label{CREATION-RULES-DICT}
As mentioned before, the OTS had to be adapted in order to use the Brazilian Portuguese rules dictionary. Until version 0.4.2, OTS implements only one keyword ranking method, based on keyword count. Although it includes a Brazilian dictionary, this dictionary is incomplete and contains errors such as non-Portuguese words and erroneous punctuation rules.

The following rules were inserted into the rules dictionary:

\begin{itemize}
\item Punctuation rules, including sentence termination and its exceptions.
\item Rules to find the roots of words.
\item A more complete list of stopwords for Brazilian Portuguese, including pronouns, linking verbs and auxiliary verbs.
\item Common adjective synonyms.
\item Gender and number rules.
\end{itemize}

\subsection{Methods}
All the implemented methods are specified in the table below:

\begin{table}[ht]
\begin{center}
\begin{tabular}{c | c | c}
Implementation & Keyword & Root \\
\hline
GST & Count & Lexical Dictionary \\
OTS & Count & Rules Dictionary (without verbs) \\
NGOTS & TF-ISF & N-Gram \\
\end{tabular}
\end{center}
\caption{Implemented Algorithms}
\label{TAB-ALG}
\end{table}

As the table shows, GST refers to the GistSumm tool, OTS to the Open Text Summarizer and NGOTS to our version of OTS, with the implementation of the N-Gram method. The Count and TF-ISF keyword extraction techniques are explained in Sections \ref{COUNT} and \ref{TF-ISF}. Lexical Dictionary, Rules Dictionary (without verbs) and N-Gram were the root extraction techniques used, as explained earlier.

\subsection{Comparison tests realization}
In order to prepare a set of texts for the similarity measures, a hundred texts in Portuguese were gathered. More details can be found in Section \ref{TESTS}. Then, Brazilian Portuguese specialists were asked to write summaries of the gathered texts, and those summaries were considered the optimum results. Finally, the original set of texts was used as input to the summarization methods mentioned, and each generated text was compared to its corresponding model using the three similarity measure methods.

The similarity results were gathered in a table for each of the methods used. The tables are explained in the next section.

\section{Performed Tests}
\label{TESTS}
In order to compare the summarization methods explained above, a set of texts was gathered from five categories, each containing twenty texts. These categories were:

\begin{itemize}
\item Politics,
\item World,
\item International,
\item Opinion,
\item Special.
\end{itemize}

Those texts were all gathered from large Brazilian news publications (for more information see \cite{SUMAutomatica}). For each of these texts a summary was made by a language specialist, and that summary was then considered the best result. As mentioned before, a summary is basically a reduction of a text using words slightly different from those of the original text, with the objective of keeping the text's sense and coherence.

Simultaneously, extracts were created automatically from the same set of texts by the methods mentioned and explained above. Both the manual summaries and the automatic extracts were prepared to have half the size of the corresponding original texts.

The automatic extracts were then compared with the manual summaries. For that, scripts were created to make the comparisons and to produce a log containing the similarity between the two corresponding texts, for each of the hundred texts.

Finally, all the similarity values were gathered in a spreadsheet, which computed the statistic values for the comparison.

\subsection{Comparison scenario}
The automatic extracts were compared with the manual summaries using a list of the most frequent words from each of the texts, generated by the OTS using the -k option (for more information see the OTS manual \cite{OTS}). These words were chosen after the stopwords were excluded, and the words appearing in the text were ordered by frequency of appearance.

The set of frequent words from each text was then compared as a binary vector using the similarity measure methods explained in Sections \ref{RECALL}, \ref{JACCARD} and \ref{COS}.

\section{Tests Results}
This section presents the results obtained through the spreadsheet mentioned in the previous section.

\begin{table}[ht]
\begin{center}
\begin{tabular}{c | c c c}
 & GST & OTS & NGOTS \\
\hline
Average & 26.45\% & 26.55\% & 30.5\% \\ 
Deviation & 12.62\% & 11.89\% & 13.23\% \\
Best Value & 60\% & 60\% & 70\% \\
Speed & 4.26x & 1x & 0.56x \\
\end{tabular}
\end{center}
\caption{Recall similarity results}
\end{table}

\begin{table}[ht]
\begin{center}
\begin{tabular}{c | c c c}
 & GST & OTS & NGOTS \\
\hline
Average & 15.91\% & 15.93\% & 18.78\% \\ 
Deviation & 8.81\% & 8.14\% & 8.01\% \\
Best Value & 42.86\% & 42.86\% & 53.85\% \\
Speed & 4.26x & 1x & 0.56x \\
\end{tabular}
\end{center}
\caption{Jaccard similarity results}
\end{table}

\begin{table}[ht]
\begin{center}
\begin{tabular}{c | c c c}
 & GST & OTS & NGOTS \\
\hline
Average & 41.07\% & 41.98\% & 45.37\% \\ 
Deviation & 17.4\% & 17.86\% & 17.71\% \\
Best Value & 74.02\% & 76.24\% & 72.17\% \\
Speed & 4.26x & 1x & 0.56x \\
\end{tabular}
\end{center}
\caption{Cosine similarity results}
\end{table}

The tables above show the results for each of the similarity algorithms explained in Sections \ref{JACCARD}, \ref{COS} and \ref{RECALL}. The summarizers, with their root and keyword extraction techniques, are shown in \tablename~\ref{TAB-ALG}. The results are given as percentages of matches between the keywords of the manual abstracts and those of the automatically generated extracts. As one hundred results were produced for each algorithm, the tables provide the average of the similarity percentages, the deviation and the best value achieved. In all cases the worst value was zero, so it is not presented in the tables. The speed values were normalized using the OTS method, which appears with speed 1x, as the reference.

\section{Conclusion}
As all the tests were run from a single test suite, results could be quickly produced over a large set of texts once those were gathered together. One of the most relevant difficulties encountered was the choice of the best assessment method, along with other internal issues related to summarization \cite{SummarizationEvaluationMethods}.

As can be seen in the tables, the NGOTS method is the one with the best results. Besides achieving the highest similarity results, it is also the one that summarizes texts in the least time.

As the attributes being analyzed may appear more than once, the best method to evaluate the similarity between the two sets of texts is the cosine method, as it is the only one of the three that takes into account the frequencies with which the attributes appear.

\begin{thebibliography}{1}

\bibitem{DATAMiningIntro}
Pang-Ning Tan, Michael Steinbach and Vipin Kumar, \emph{Introduction to Data Mining}.

\bibitem{SUMAutomatica}
Alfredo Luiz Foltran Filho, Jacir Bordim and Ricardo Jacobi, \emph{Sumariza\c c\~ ao Autom\' atica de Textos Web em Servidores de P\' aginas para Dispositivos M\' oveis}.

\bibitem{AnaisEncontroInteligenciaArticial}
Thiago Alexandre Salgueiro Pardo, Luciana Helena Machado Rino and Maria das Gra\c cas Volpe Nunes. Neuralsumm: Uma abordagem conexionista para a sumariza\c c\~ ao autom\' atica de textos. \emph{Anais do IV Encontro Nacional de Intelig\^ encia Artificial, 2003}.

\bibitem{SummarizationEvaluationMethods}
Hongyan Jing, Kathleen McKeown, Regina Barzilay and Michael Elhadad. Summarization evaluation methods: Experiments and analysis. \emph{Proceedings of the AAAI Intelligent Text Summarization, 1998}.

\bibitem{RelatoriosNucleoInterinstitucional}
Alice Picon Espina and Lucia Helena Machado Rino. Utiliza\c c\~ ao de m\' etodos extrativos na sumariza\c c\~ ao autom\' atica de textos. \emph{S\' eries de relat\' orios do N\' ucleo de Interinstitucional de Lingu\' istica Computacional, March 2002}.

\bibitem{SumarizadorAutomaticoTextos}
Thiago Alexandre Salgueiro Pardo. Gistsumm: Um sumarizador autom\' atico baseado na id\' eia principal de textos. \emph{S\' erie de Relat\' orios do N\' ucleo de Interinstucional de Lingu\' istica Computacional, October 2003}.

\bibitem{OTS}
Open Text Summarizer, available on the website http://libots.sourceforge.net.

\bibitem{GST}
Gist Summarizer, available on the website http://www.icmc.usp.br/\~{}taspardo/gistsumm.htm.

\end{thebibliography}

\end{document}
